arxiv_id (string, 11-18 chars) | title (string, 10-217 chars) | abstract (string, 9-2.1k chars) | subjects (150 classes) | scrubbed_comments (string, 2-1.33k chars) | category (1 class) | __index_level_0__ (int64, 1-14.6k)
---|---|---|---|---|---|---
2308.14542v2 | On a set of some recent contributions to energy equality for the Navier-Stokes equations | In these notes we want, in addition to presenting some new results, to both clean up and refine some reflections on a couple of articles published a few years ago, in 2019-20. These papers concerned integral sufficient conditions on $u\,,$ $\nabla u\,,$ and mixed conditions, to guarantee the energy equality, EE in the sequel, for solutions of the Navier-Stokes equations under the classical non-slip boundary condition. Concerning the $\nabla u$ case, a crucial role was played by Berselli and Chiodaroli's well-known pioneering 2019 work on the subject. The above three papers are the main sources of these notes. References will be mostly concentrated on their direct relation to the above papers at the time of publication. More recent results will not be stated throughout the article. However, in the last section, the reader will be suitably referred to the more recent bibliography. Below, we also turn back to the innovative interpretation of some main parameters which allowed us to overcome their apparent incongruence. Non-Newtonian fluids were also considered in our 2019 paper, maybe for the first time in the above particular $\nabla u\,$ context of Berselli-Chiodaroli. However we will stick mostly to the Newtonian case since in the end we come to the conclusion that there are no particular additional obstacles to extending the present results from Newtonian to non-Newtonian fluids. Hence we do not go further in this direction. | Analysis of PDEs (math.AP) | There could be a mistake, perhaps a wrong sentence. We need some time to be sure about that. It is a delicate point | factual/methodological/other critical errors in manuscript | 13,535
2308.14640v2 | New Bounds on Fuzzy Dark Matter from Galaxy-Galaxy Strong-Lensing Observations | Fuzzy Dark Matter (FDM) has recently gained attention as a motivated candidate for the dark matter (DM) content of the Universe, as opposed to the commonly assumed cold DM (CDM), since the soliton profile intrinsic to FDM models was found to be particularly well suited to reproduce observed galaxy mass profiles. While FDM as a single DM component has been strongly constrained by multiple probes, there remained a mass window between $10^{-25}\,\mathrm{eV}$ and $10^{-24}\,\mathrm{eV}$ in which it can comprise a large portion ($\gtrsim\mathcal{O}(10\%)$) of the total DM. In this work, we consider gravitational lensing measurements in the strong lensing regime, which are one of the only means to directly constrain the distribution and profile of DM in astronomical bodies. Using a simple model that combines a soliton FDM component with a Navarro-Frenk-White (NFW) profile, we explore under what conditions DM halos with this hybrid profile are able to reproduce the observed Einstein radii of several known lenses. We find that FDM with a particle mass of $\lesssim 10^{-24}\,{\rm eV}$ cannot explain the observations if it makes up more than $\sim10\%$ of the total DM, effectively closing the lingering FDM mass window. | Cosmology and Nongalactic Astrophysics (astro-ph.CO) | Constraints are likely overestimated due to issues with the numerical analysis. A revised version may be submitted if and when a reanalysis is completed. The originally submitted version of the paper is thus withdrawn until further notice | factual/methodological/other critical errors in manuscript | 13,536 |
2308.15054v6 | A Quasi-Polynomial Algorithm for Subset-Sum Problems with At Most One Solution | In this paper we study the problem of maximizing the distance to a given point over an intersection of balls. It was already known that this problem can be solved in polynomial time and space if the given point is not in the convex hull of the balls centers. The cases where the given point is in the convex hull of the balls centers include all NP-complete problems as we show. Some novel results are given in this area. A novel projection algorithm is developed then applied in the context of the Subset Sum Problem (SSP). Under the assumption that the SSP has at most one solution, we provide a quasi-polynomial algorithm, which decreases the radius of an initial ball containing the solution to the SSP. We perform some numerical tests which show the effectiveness of the proposed algorithm. | Optimization and Control (math.OC) | Some calculations turned out to be false, hence the speed of convergence is worse than advertised! | factual/methodological/other critical errors in manuscript | 13,540 |
2308.15511v2 | The pcf theory of non fixed points | We deal with values taken by various pseudopower functions at a singular cardinal that is not a fixed point of the aleph function. | Logic (math.LO) | There is a mistake in Proposition 3.4 | factual/methodological/other critical errors in manuscript | 13,541 |
2309.00736v4 | Prediction Error Estimation in Random Forests | In this paper, error estimates of classification Random Forests are quantitatively assessed. Based on the initial theoretical framework built by Bates et al. (2023), the true error rate and expected error rate are theoretically and empirically investigated in the context of a variety of error estimation methods common to Random Forests. We show that in the classification case, Random Forests' estimates of prediction error are closer on average to the true error rate than to the average prediction error. This is the opposite of the findings of Bates et al. (2023), which are given for logistic regression. We further show that our result holds across different error estimation strategies such as cross-validation, bagging, and data splitting. | Machine Learning (stat.ML) | As we were working on revisions, we found a fatal flaw in the procedure. All of the results are problematic / wrong | factual/methodological/other critical errors in manuscript | 13,544
2309.00895v2 | The Dantzig Selector: Sparse Signals Recovery via l_p-q Minimization | In this paper, we propose the Dantzig selector based on $l_{p-q}$ ($0<p\leq1, 1<q\leq2$) minimization for signal recovery. First, we establish the convex combination representation of sparse vectors under the $l_{p-q}$ minimization problem. Next, we give signal recovery guarantees based on two classes of restricted isometry property frames. Last, some graphical illustrations are presented for the sufficient conditions of signal recovery. | Optimization and Control (math.OC) | There is an error in Lemma 3.1 | factual/methodological/other critical errors in manuscript | 13,546
2309.01860v2 | Attention-Driven Multi-Modal Fusion: Enhancing Sign Language Recognition and Translation | In this paper, we devise a mechanism for the addition of multi-modal information with an existing pipeline for continuous sign language recognition and translation. In our procedure, we have incorporated optical flow information with RGB images to enrich the features with movement-related information. This work studies the feasibility of such modality inclusion using a cross-modal encoder. The plugin we have used is very lightweight and doesn't need to include a separate feature extractor for the new modality in an end-to-end manner. We have applied the changes in both sign language recognition and translation, improving the result in each case. We have evaluated the performance on the RWTH-PHOENIX-2014 dataset for sign language recognition and the RWTH-PHOENIX-2014T dataset for translation. On the recognition task, our approach reduced the WER by 0.9, and on the translation task, our approach increased most of the BLEU scores by ~0.6 on the test set. | Computer Vision and Pattern Recognition (cs.CV) | This version has some errors. Our schedule is packed, so we don't have enough time to correct it. We will share another work when we have time to fix this | factual/methodological/other critical errors in manuscript | 13,551 |
2309.02381v2 | Reynolds Averaged Solutions of the Navier-Stokes Equation | The mean of Young measure solutions for the Navier-Stokes equations with general initial conditions are PDE solutions of the Navier-Stokes equation of the class considered by Leray and Hopf. | Mathematical Physics (math-ph) | A critical error was found in one of the proofs | factual/methodological/other critical errors in manuscript | 13,553 |
2309.02911v2 | A Multimodal Learning Framework for Comprehensive 3D Mineral Prospectivity Modeling with Jointly Learned Structure-Fluid Relationships | This study presents a novel multimodal fusion model for three-dimensional mineral prospectivity mapping (3D MPM), effectively integrating structural and fluid information through a deep network architecture. Leveraging Convolutional Neural Networks (CNN) and Multilayer Perceptrons (MLP), the model employs canonical correlation analysis (CCA) to align and fuse multimodal features. Rigorous evaluation on the Jiaojia gold deposit dataset demonstrates the model's superior performance in distinguishing ore-bearing instances and predicting mineral prospectivity, outperforming other models in result analyses. Ablation studies further reveal the benefits of joint feature utilization and CCA incorporation. This research not only advances mineral prospectivity modeling but also highlights the pivotal role of data integration and feature alignment for enhanced exploration decision-making. | Machine Learning (cs.LG) | Upon careful review, it has come to our attention that inaccuracies exist in the formulation of the structure-fluid relationships, impacting the validity of the presented results | factual/methodological/other critical errors in manuscript | 13,558 |
2309.03137v2 | There are only two paradoxes | Using a graph representation of classical logic, the paper shows that the liar or Yablo pattern occurs in every semantic paradox. The core graph theoretic result generalizes theorem of Richardson, showing solvability of finite graphs without odd cycles, to arbitrary graphs which are proven solvable when no odd cycles nor patterns generalizing Yablo's occur. This follows from an earlier result by a new compactness-like theorem, holding for infinitary logic and utilizing the graph representation. | Logic (math.LO) | The proof of the main Theorem 3.7 has some flaws. A minor one, which can be fixed, is the choice of \mathcal{H} as "the set of all sinkless induced subgraphs of H having finitely many ends." A more serious one is that the proof could be used to establish an untrue claim. It is not clear yet what specific mistake causes it | factual/methodological/other critical errors in manuscript | 13,559 |
2309.03227v2 | Learning a Patent-Informed Biomedical Knowledge Graph Reveals Technological Potential of Drug Repositioning Candidates | Drug repositioning-a promising strategy for discovering new therapeutic uses for existing drugs-has been increasingly explored in the computational science literature using biomedical databases. However, the technological potential of drug repositioning candidates has often been overlooked. This study presents a novel protocol to comprehensively analyse various sources such as pharmaceutical patents and biomedical databases, and identify drug repositioning candidates with both technological potential and scientific evidence. To this end, first, we constructed a scientific biomedical knowledge graph (s-BKG) comprising relationships between drugs, diseases, and genes derived from biomedical databases. Our protocol involves identifying drugs that exhibit limited association with the target disease but are closely located in the s-BKG, as potential drug candidates. We constructed a patent-informed biomedical knowledge graph (p-BKG) by adding pharmaceutical patent information. Finally, we developed a graph embedding protocol to ascertain the structure of the p-BKG, thereby calculating the relevance scores of those candidates with target disease-related patents to evaluate their technological potential. Our case study on Alzheimer's disease demonstrates its efficacy and feasibility, while the quantitative outcomes and systematic methods are expected to bridge the gap between computational discoveries and successful market applications in drug repositioning research. | Artificial Intelligence (cs.AI) | We are sorry to withdraw this paper. We found some critical errors in the introduction and results sections. Specifically, we found that the first author have wrongly inserted citations on background works and he made mistakes in the graph embedding methods and relevant results are wrongly calculated. In this regard, we tried to revise this paper and withdraw the current version. Thank you | factual/methodological/other critical errors in manuscript | 13,563 |
2309.03661v2 | Prompt-based Context- and Domain-aware Pretraining for Vision and Language Navigation | With strong representation capabilities, pretrained vision-language models are widely used in vision and language navigation (VLN). However, most of them are trained on web-crawled general-purpose datasets, which incurs a considerable domain gap when used for VLN tasks. Another challenge for VLN is how the agent understands the contextual relations between actions on a trajectory and performs cross-modal alignment sequentially. In this paper, we propose a novel Prompt-bAsed coNtext- and Domain-Aware (PANDA) pretraining framework to address these problems. It performs prompting in two stages. In the domain-aware stage, we apply a low-cost prompt tuning paradigm to learn soft visual prompts from an in-domain dataset for equipping the pretrained models with object-level and scene-level cross-modal alignment in VLN tasks. Furthermore, in the context-aware stage, we design a set of hard context prompts to capture the sequence-level semantics and instill both out-of-context and contextual knowledge in the instruction into cross-modal representations. They enable further tuning of the pretrained models via contrastive learning. Experimental results on both R2R and REVERIE show the superiority of PANDA compared to previous state-of-the-art methods. | Computer Vision and Pattern Recognition (cs.CV) | The paper has some errors, and we wish to withdraw it | factual/methodological/other critical errors in manuscript | 13,565
2309.04497v2 | Formal derivation of an inversion formula for the approximation of interface defects by means of active thermography | Thermal properties of a two-layered composite conductor are modified in case the interface is damaged. The present paper deals with nondestructive evaluation of perturbations of interface thermal conductance due to the presence of defects. The specimen is heated by means of a lamp system or a laser while its surface temperature is measured with an infrared camera in the typical framework of Active Thermography. Defects affecting the interface are evaluated using Laplace transformation and suitable symmetries of parabolic differential operators (reciprocity). | Mathematical Physics (math-ph) | the perturbative analysis in section 5 is meaningful only if the terms delta h and delta U are normalized. The correction is absolutely necessary. It implies that the evaluation of the constants (stability/instability) must be done again and carefully checked | factual/methodological/other critical errors in manuscript | 13,566 |
2309.05938v2 | Answering Subjective Induction Questions on Products by Summarizing Multi-sources Multi-viewpoints Knowledge | This paper proposes a new task in the field of Answering Subjective Induction Question on Products (SUBJPQA). The answer to this kind of question is non-unique, but can be interpreted from many perspectives. For example, the answer to 'whether the phone is heavy' has a variety of different viewpoints. A satisfied answer should be able to summarize these subjective opinions from multiple sources and provide objective knowledge, such as the weight of a phone. That is quite different from the traditional QA task, in which the answer to a factoid question is unique and can be found from a single data source. To address this new task, we propose a three-steps method. We first retrieve all answer-related clues from multiple knowledge sources on facts and opinions. The implicit commonsense facts are also collected to supplement the necessary but missing contexts. We then capture their relevance with the questions by interactive attention. Next, we design a reinforcement-based summarizer to aggregate all these knowledgeable clues. Based on a template-controlled decoder, we can output a comprehensive and multi-perspective answer. Due to the lack of a relevant evaluated benchmark set for the new task, we construct a large-scale dataset, named SupQA, consisting of 48,352 samples across 15 product domains. Evaluation results show the effectiveness of our approach. | Computation and Language (cs.CL) | 1. There are some errors in the data analysis table in the dataset SupQA, which needs to be corrected. 2. There is something wrong with the partial expression of the formula. 3. It will be resubmitted after modification | factual/methodological/other critical errors in manuscript | 13,570 |
2309.06909v2 | Intelligent Reflective Surface Assist Integrated Sensing and Wireless Power Transfer | Wireless sensing and wireless energy are enablers to pave the way for smart transportation and a greener future. In this paper, an intelligent reflecting surface (IRS) assisted integrated sensing and wireless power transfer (ISWPT) system is investigated, where the transmitter in transportation infrastructure networks sends signals to sense multiple targets and simultaneously to multiple energy harvesting devices (EHDs) to power them. In light of the performance tradeoff between energy harvesting and sensing, we propose to jointly optimize the system performance via optimizing the beamforming and IRS phase shift. However, the coupling of optimization variables makes the formulated problem non-convex. Thus, an alternating optimization approach is introduced, based on which two algorithms are proposed to solve the problem. Specifically, the first one involves a semi-definite program technique, while the second one features a low-complexity optimization algorithm based on successive convex approximation and majorization minimization. Our simulation results validate the proposed algorithms and demonstrate the advantages of using IRS to assist wireless power transfer in ISWPT systems. | Signal Processing (eess.SP) | Firstly, the simulation has some errors and needs to be checked. Secondly, the author relationship between Zheng Li and Zheng Chu needs to be corrected | factual/methodological/other critical errors in manuscript | 13,574
2309.09175v2 | Imbalanced Data Stream Classification using Dynamic Ensemble Selection | Modern streaming data categorization faces significant challenges from concept drift and class imbalanced data. This negatively impacts the output of the classifier, leading to improper classification. Furthermore, other factors such as the overlapping of multiple classes limit the extent of the correctness of the output. This work proposes a novel framework for integrating data pre-processing and dynamic ensemble selection, by formulating the classification framework for the nonstationary drifting imbalanced data stream, which employs the data pre-processing and dynamic ensemble selection techniques. The proposed framework was evaluated using six artificially generated data streams with differing imbalance ratios in combination with two different types of concept drifts. Each stream is composed of 200 chunks of 500 objects described by eight features and contains five concept drifts. Seven pre-processing techniques and two dynamic ensemble selection methods were considered. According to experimental results, data pre-processing combined with Dynamic Ensemble Selection techniques significantly delivers more accuracy when dealing with imbalanced data streams. | Machine Learning (cs.LG) | Made an error in the research and need to rectify it | factual/methodological/other critical errors in manuscript | 13,584 |
2309.09270v2 | Continuous Modeling of the Denoising Process for Speech Enhancement Based on Deep Learning | In this paper, we explore a continuous modeling approach for deep-learning-based speech enhancement, focusing on the denoising process. We use a state variable to indicate the denoising process. The starting state is noisy speech and the ending state is clean speech. The noise component in the state variable decreases with the change of the state index until the noise component is 0. During training, a UNet-like neural network learns to estimate every state variable sampled from the continuous denoising process. In testing, we introduce a controlling factor as an embedding, ranging from zero to one, to the neural network, allowing us to control the level of noise reduction. This approach enables controllable speech enhancement and is adaptable to various application scenarios. Experimental results indicate that preserving a small amount of noise in the clean target benefits speech enhancement, as evidenced by improvements in both objective speech measures and automatic speech recognition performance. | Audio and Speech Processing (eess.AS) | We found the results were obtained with some wrong experimental settings. We need new experiments | factual/methodological/other critical errors in manuscript | 13,585
2309.09464v2 | Reducing Adversarial Training Cost with Gradient Approximation | Deep learning models have achieved state-of-the-art performances in various domains, while they are vulnerable to the inputs with well-crafted but small perturbations, which are termed adversarial examples (AEs). Among many strategies to improve the model robustness against AEs, Projected Gradient Descent (PGD) based adversarial training is one of the most effective methods. Unfortunately, the prohibitive computational overhead of generating strong enough AEs, due to the maximization of the loss function, sometimes makes the regular PGD adversarial training impractical when using larger and more complicated models. In this paper, we propose that the adversarial loss can be approximated by the partial sum of Taylor series. Furthermore, we approximate the gradient of adversarial loss and propose a new and efficient adversarial training method, adversarial training with gradient approximation (GAAT), to reduce the cost of building up robust models. Additionally, extensive experiments demonstrate that this efficiency improvement can be achieved without any or with very little loss in accuracy on natural and adversarial examples, which shows that our proposed method saves up to 60\% of the training time with comparable model test accuracy on MNIST, CIFAR-10 and CIFAR-100 datasets. | Computer Vision and Pattern Recognition (cs.CV) | There are some issues with the experiments. We withdraw this manuscript | factual/methodological/other critical errors in manuscript | 13,586
2309.11709v3 | Product states optimize quantum $p$-spin models for large $p$ | We consider the problem of estimating the maximal energy of quantum $p$-local spin glass random Hamiltonians, the quantum analogues of widely studied classical spin glass models. Denoting by $E^*(p)$ the (appropriately normalized) maximal energy in the limit of a large number of qubits $n$, we show that $E^*(p)$ approaches $\sqrt{2\log 6}$ as $p$ increases. This value is interpreted as the maximal energy of a much simpler so-called Random Energy Model, widely studied in the setting of classical spin glasses. Our most notable and (arguably) surprising result proves the existence of near-maximal energy states which are product states, and thus not entangled. Specifically, we prove that with high probability as $n\to\infty$, for any $E<E^*(p)$ there exists a product state with energy $\geq E$ at sufficiently large constant $p$. Even more surprisingly, this remains true even when restricting to tensor products of Pauli eigenstates. Our approximations go beyond what is known from monogamy-of-entanglement style arguments -- the best of which, in this normalization, achieve approximation error growing with $n$. Our results not only challenge prevailing beliefs in physics that extremely low-temperature states of random local Hamiltonians should exhibit non-negligible entanglement, but they also imply that classical algorithms can be just as effective as quantum algorithms in optimizing Hamiltonians with large locality -- though performing such optimization is still likely a hard problem. Our results are robust with respect to the choice of the randomness (disorder) and apply to the case of sparse random Hamiltonian using Lindeberg's interpolation method. The proof of the main result is obtained by estimating the expected trace of the associated partition function, and then matching its asymptotics with the extremal energy of product states using the second moment method. | Quantum Physics (quant-ph) | There is an error in the proof of the current draft with regards to the upper bound in Section 5. Consequently, the main result in this paper is not correct. A manuscript with new and corrected results will be uploaded as a separate arXiv document | factual/methodological/other critical errors in manuscript | 13,592 |
2309.11967v2 | Strong Converse Inequalities for Bernstein Operators via Krawtchouk Polynomials | We obtain strong converse inequalities for the Bernstein operators with explicit constants. One of the main ingredients in our approach is the representation of the derivatives of the Bernstein operators in terms of the orthogonal polynomials with respect to the binomial distribution, namely, the Krawtchouk polynomials | Classical Analysis and ODEs (math.CA) | We found an error in Lemma 4, namely, the factor m! is missed in formula (32). This implies that the results given in Section 4 are wrong, and therefore the proof of our main result is also wrong | factual/methodological/other critical errors in manuscript | 13,596 |
2309.12056v2 | BELT: Bootstrapping Electroencephalography-to-Language Decoding and Zero-Shot Sentiment Classification by Natural Language Supervision | This paper presents BELT, a novel model and learning framework for the pivotal topic of brain-to-language translation research. The translation from noninvasive brain signals into readable natural language has the potential to promote the application scenario as well as the development of brain-computer interfaces (BCI) as a whole. The critical problem in brain signal decoding or brain-to-language translation is the acquisition of semantically appropriate and discriminative EEG representation from a dataset of limited scale and quality. The proposed BELT method is a generic and efficient framework that bootstraps EEG representation learning using off-the-shelf large-scale pretrained language models (LMs). With a large LM's capacity for understanding semantic information and zero-shot generalization, BELT utilizes large LMs trained on Internet-scale datasets to bring significant improvements to the understanding of EEG signals. In particular, the BELT model is composed of a deep conformer encoder and a vector quantization encoder. Semantic EEG representation is achieved by a contrastive learning step that provides natural language supervision. We achieve state-of-the-art results on two featuring brain decoding tasks including the brain-to-language translation and zero-shot sentiment classification. Specifically, our model surpasses the baseline model on both tasks by 5.45% and over 10% and achieves a 42.31% BLEU-1 score and 67.32% precision on the main evaluation metrics for translation and zero-shot sentiment classification respectively. | Artificial Intelligence (cs.AI) | We decided to withdraw the manuscript because of multiple errors in the paper due to poor writing and inspection | factual/methodological/other critical errors in manuscript | 13,597
2309.12340v2 | Security for Children in the Digital Society -- A Rights-based and Research Ethics Approach | In this position paper, we present initial perspectives and research results from the project "SIKID - Security for Children in the Digital World." The project is situated in a German context with a focus on European frameworks for the development of Artificial Intelligence and the protection of children from security risks arising in the course of algorithm-mediated online communication. The project strengthens networks of relevant stakeholders, explores regulatory measures and informs policy makers, and develops a children's rights approach to questions of security for children online while also developing a research ethics approach for conducting research with children on online harms such as cybergrooming and sexual violence against children. | Computers and Society (cs.CY) | This version included false figures and technical difficulties made it difficult to replace the current version with another one that does not include the false figures | factual/methodological/other critical errors in manuscript | 13,600 |
2309.12481v2 | HANS, are you clever? Clever Hans Effect Analysis of Neural Systems | Instruction-tuned Large Language Models (It-LLMs) have been exhibiting outstanding abilities to reason around cognitive states, intentions, and reactions of all people involved, letting humans guide and comprehend day-to-day social interactions effectively. In fact, several multiple-choice questions (MCQ) benchmarks have been proposed to construct solid assessments of the models' abilities. However, earlier works are demonstrating the presence of inherent "order bias" in It-LLMs, posing challenges to the appropriate evaluation. In this paper, we investigate It-LLMs' resilience abilities towards a series of probing tests using four MCQ benchmarks. Introducing adversarial examples, we show a significant performance gap, mainly when varying the order of the choices, which reveals a selection bias and brings into discussion reasoning abilities. Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the It-LLMs, strengthened by including significant examples in few-shot scenarios. Finally, by using the Chain-of-Thought (CoT) technique, we elicit the model to reason and mitigate the bias by obtaining more robust models. | Computation and Language (cs.CL) | This paper contains erroneous evaluations and we would like to withdraw it | factual/methodological/other critical errors in manuscript | 13,601 |
2309.12556v2 | Relaxed optimal control for the stochastic Landau-Lifshitz-Gilbert equation | We consider the stochastic Landau-Lifshitz-Gilbert equation, perturbed by a real-valued Wiener process. We add an external control to the effective field as an attempt to drive the magnetization to a desired state and also to control thermal fluctuations. We use the theory of Young measures to relax the given control problem along with the associated cost. We consider a control operator that can depend (possibly non-linearly) on both the control and the associated solution. Moreover, we consider a fairly general associated cost functional without any special convexity assumption. We use certain compactness arguments, along with the Jakubowski version of the Skorohod Theorem to show that the relaxed problem admits an optimal control. | Optimization and Control (math.OC) | Possible error in Section 3 and Section 4 concerning Uniform estimates | factual/methodological/other critical errors in manuscript | 13,602 |
2309.12808v2 | ChatPRCS: A Personalized Support System for English Reading Comprehension based on ChatGPT | As a common approach to learning English, reading comprehension primarily entails reading articles and answering related questions. However, the complexity of designing effective exercises results in students encountering standardized questions, making it challenging to align with individualized learners' reading comprehension ability. By leveraging the advanced capabilities offered by large language models, exemplified by ChatGPT, this paper presents a novel personalized support system for reading comprehension, referred to as ChatPRCS, based on the Zone of Proximal Development theory. ChatPRCS employs methods including reading comprehension proficiency prediction, question generation, and automatic evaluation, among others, to enhance reading comprehension instruction. First, we develop a new algorithm that can predict learners' reading comprehension abilities using their historical data as the foundation for generating questions at an appropriate level of difficulty. Second, a series of new ChatGPT prompt patterns is proposed to address two key aspects of reading comprehension objectives: question generation, and automated evaluation. These patterns further improve the quality of generated questions. Finally, by integrating personalized ability and reading comprehension prompt patterns, ChatPRCS is systematically validated through experiments. Empirical results demonstrate that it provides learners with high-quality reading comprehension questions that are broadly aligned with expert-crafted questions at a statistical level. | Computation and Language (cs.CL) | We are very sorry, we found a problem in the article and will resubmit it after modification | factual/methodological/other critical errors in manuscript | 13,605 |
2309.14057v2 | Weakly Supervised Semantic Segmentation by Knowledge Graph Inference | Currently, existing efforts in Weakly Supervised Semantic Segmentation (WSSS) based on Convolutional Neural Networks (CNNs) have predominantly focused on enhancing the multi-label classification network stage, with limited attention given to the equally important downstream segmentation network. Furthermore, CNN-based local convolutions lack the ability to model the extensive inter-category dependencies. Therefore, this paper introduces a graph reasoning-based approach to enhance WSSS. The aim is to improve WSSS holistically by simultaneously enhancing both the multi-label classification and segmentation network stages. In the multi-label classification network segment, external knowledge is integrated, coupled with GCNs, to globally reason about inter-class dependencies. This encourages the network to uncover features in non-salient regions of images, thereby refining the completeness of generated pseudo-labels. In the segmentation network segment, the proposed Graph Reasoning Mapping (GRM) module is employed to leverage knowledge obtained from textual databases, facilitating contextual reasoning for class representation within image regions. This GRM module enhances feature representation in high-level semantics of the segmentation network's local convolutions, while dynamically learning semantic coherence for individual samples. Using solely image-level supervision, we have achieved state-of-the-art performance in WSSS on the PASCAL VOC 2012 and MS-COCO datasets. Extensive experimentation on both the multi-label classification and segmentation network stages underscores the effectiveness of the proposed graph reasoning approach for advancing WSSS. | Computer Vision and Pattern Recognition (cs.CV) | Our description in Chapter 3, Section 3.2 of the paper is too repetitive with the paper "Object detection meets knowledge graphs". There is an error in the description of formula (5) in Section 3.3. And a detailed reasoning process is required for formula (5). Therefore, we wish to request a retraction of the paper | factual/methodological/other critical errors in manuscript | 13,611 |
2309.14063v3 | Preferential Multi-Target Search in Indoor Environments using Semantic SLAM | In recent years, the demand for service robots capable of executing tasks beyond autonomous navigation has grown. In the future, service robots will be expected to perform complex tasks like 'Set table for dinner'. High-level tasks like these, require, among other capabilities, the ability to retrieve multiple targets. This paper delves into the challenge of locating multiple targets in an environment, termed 'Find my Objects.' We present a novel heuristic designed to facilitate robots in conducting a preferential search for multiple targets in indoor spaces. Our approach involves a Semantic SLAM framework that combines semantic object recognition with geometric data to generate a multi-layered map. We fuse the semantic maps with probabilistic priors for efficient inferencing. Recognizing the challenges introduced by obstacles that might obscure a navigation goal and render standard point-to-point navigation strategies less viable, our methodology offers resilience to such factors. Importantly, our method is adaptable to various object detectors, RGB-D SLAM techniques, and local navigation planners. We demonstrate the 'Find my Objects' task in real-world indoor environments, yielding quantitative results that attest to the effectiveness of our methodology. This strategy can be applied in scenarios where service robots need to locate, grasp, and transport objects, taking into account user preferences. For a brief summary, please refer to our video: this https URL | Robotics (cs.RO) | There are some errors in Fig. 7 that were previously missed. Specifically, some of the chart values were interchanged | factual/methodological/other critical errors in manuscript | 13,612 |
2309.15219v2 | Modules with commutative endomorphism rings | In this paper we review and study $R$-modules $M$ for which $S = End_R(M)$ is commutative. For this, we define the concept of center of modules which is a natural generalization of the center of rings. The properties of center of modules, submodules and direct products and also, modules whose bi-endomorphism rings are commutative, have been considered. More generally, we study the sequence of the endomorphism rings and define the endo-commutativity dimension of modules. | Commutative Algebra (math.AC) | It has some errors and false results | factual/methodological/other critical errors in manuscript | 13,614 |
2309.15765v2 | A continuous version of multiple zeta values with double variables | In this paper we define a continuous version of multiple zeta functions with double variables. They can be analytically continued to meromorphic functions on $\mathbb{C}^r$ with only simple poles at some special hyperplanes. The evaluations of these functions at positive integers (continuous multiple zeta values) satisfy the first shuffle product and the second shuffle product. We prove that the dimensions of the $\mathbb{Q}$-linear spaces generated by continuous multiple zeta values of a given weight are finite. By using a theorem of this http URL, we prove that continuous multiple zeta values include all cyclotomic multiple zeta values of level 2. We will give a detailed analysis of the two different shuffle products. Furthermore, we will discuss the extension of the two different products and prove a theorem comparing the two different shuffle products; this is an analogue of Ihara-Kaneko-Zagier's comparison theorem in the case of continuous multiple zeta values. As an application, we will give a new method to prove some of Ramanujan's identities. Finally, we will provide some conjectures. | Number Theory (math.NT) | The proof of Theorem 3.10 is incorrect | factual/methodological/other critical errors in manuscript | 13,616
2309.15957v5 | Drastic reduction of dynamic liquid-solid friction in supercooled glycerol | This study addresses the influence of internal liquid dynamics on liquid-solid friction. Taking advantage of the wide range of relaxation timescales in supercooled liquids, we use a tuning-fork-based AFM to measure the slippage of supercooled glycerol on mica at 30 kHz. We report a 2-order of magnitude increase of slippage with decreasing temperature by only 30°C. More importantly, as the bulk liquid dynamics are slowed with decreasing temperature, we report a sharp drop of the interfacial friction coefficient in contrast with the usual assumption of thermally activated interfacial dynamics. To rationalize this original behavior, we account for the contribution of solid fluctuations to liquid friction. We show that a minimalistic single phonon-branch model of the mica surface yields semi-quantitative agreement with our measurements. In this picture, the liquid's relaxation rate is the tuning knob between two friction regimes where the wall is seen either as a static corrugated potential or as a thermally fluctuating surface. Remarkably, this study bridges soft and hard condensed matter: hydrodynamic flow controlled by the solid's dynamical modes. | Soft Condensed Matter (cond-mat.soft) | Data analysis is modified | factual/methodological/other critical errors in manuscript | 13,618 |
2309.16078v2 | Quantitative constraint on the source contribution to the Galactic diffuse gamma rays detected by the Tibet air shower array | The fraction of the contribution from yet-unresolved gamma-ray sources in the Galactic diffuse gamma rays observed by the Tibet air shower array is an important key to interpreting recent multi-messenger observations. This paper shows a surprising fact: no Tibet diffuse events above 398 TeV come from the gamma-ray sources newly detected above 100 TeV by LHAASO. Based on this observational fact, the contribution of sources unresolved by LHAASO to the Tibet diffuse events is estimated to be less than 31% above 398 TeV with a 99% confidence level. Our result shows that unresolved sources make only a sub-dominant contribution to the Tibet diffuse events above 398 TeV and a large fraction of the events are truly of a diffusive nature. | High Energy Astrophysical Phenomena (astro-ph.HE) | Calculations in Eq-(2) should have used the probability p that a Tibet diffuse gamma-ray event is of unresolved-source origin, but the parameter p in Eq-(2) is the probability that a gamma-ray event above 398 TeV from a source marginally detected by LHAASO is detected by the Tibet. This mistake leads to a wrong estimate of the limit on the source fraction to the Tibet diffuse gamma-ray events | factual/methodological/other critical errors in manuscript | 13,619
2309.16406v2 | Towards surgery with good quantum LDPC codes | We show that the good quantum LDPC codes of Panteleev-Kalachev \cite{PK} allow for surgery using any logical qubits, albeit incurring an asymptotic penalty which lowers the rate and distance scaling. We also prove that we can satisfy 3 of the 4 conditions for performing surgery \textit{without} incurring an asymptotic penalty. If the last condition is also satisfied then we can perform code surgery while maintaining $k, d\in \Theta(n)$. | Quantum Physics (quant-ph) | Critical error in the proof of Thm 4.5: a surjective chain map does not imply a surjective map between homology groups | factual/methodological/other critical errors in manuscript | 13,621 |
2310.00029v2 | Adversarial Driving Behavior Generation Incorporating Human Risk Cognition for Autonomous Vehicle Evaluation | Autonomous vehicle (AV) evaluation has been the subject of increased interest in recent years both in industry and in academia. This paper focuses on the development of a novel framework for generating adversarial driving behavior of a background vehicle interfering with the AV to expose effective and rational risky events. Specifically, the adversarial behavior is learned by a reinforcement learning (RL) approach incorporated with the cumulative prospect theory (CPT) which allows representation of human risk cognition. Then, the extended version of deep deterministic policy gradient (DDPG) technique is proposed for training the adversarial policy while ensuring training stability as the CPT action-value function is leveraged. A comparative case study regarding the cut-in scenario is conducted on a high fidelity Hardware-in-the-Loop (HiL) platform and the results demonstrate the adversarial effectiveness to infer the weakness of the tested AV. | Artificial Intelligence (cs.AI) | We find there is an expression error in Section III.A. A corrected version will be offered | factual/methodological/other critical errors in manuscript | 13,625
2310.00100v3 | Multilingual Natural Language Processing Model for Radiology Reports -- The Summary is all you need! | The impression section of a radiology report summarizes important radiology findings and plays a critical role in communicating these findings to physicians. However, the preparation of these summaries is time-consuming and error-prone for radiologists. Recently, numerous models for radiology report summarization have been developed. Nevertheless, there is currently no model that can summarize these reports in multiple languages. Such a model could greatly improve future research and the development of Deep Learning models that incorporate data from patients with different ethnic backgrounds. In this study, the generation of radiology impressions in different languages was automated by fine-tuning a model, publicly available, based on a multilingual text-to-text Transformer to summarize findings available in English, Portuguese, and German radiology reports. In a blind test, two board-certified radiologists indicated that for at least 70% of the system-generated summaries, the quality matched or exceeded the corresponding human-written summaries, suggesting substantial clinical reliability. Furthermore, this study showed that the multilingual model outperformed other models that specialized in summarizing radiology reports in only one language, as well as models that were not specifically designed for summarizing radiology reports, such as ChatGPT. | Computation and Language (cs.CL) | Problems with the model | factual/methodological/other critical errors in manuscript | 13,626 |
2310.00526v2 | Are Graph Neural Networks Optimal Approximation Algorithms? | In this work we design graph neural network architectures that can be used to obtain optimal approximation algorithms for a large class of combinatorial optimization problems using powerful algorithmic tools from semidefinite programming (SDP). Concretely, we prove that polynomial-sized message passing algorithms can represent the most powerful polynomial time algorithms for Max Constraint Satisfaction Problems assuming the Unique Games Conjecture. We leverage this result to construct efficient graph neural network architectures, OptGNN, that obtain high-quality approximate solutions on landmark combinatorial optimization problems such as Max Cut and maximum independent set. Our approach achieves strong empirical results across a wide range of real-world and synthetic datasets against both neural baselines and classical algorithms. Finally, we take advantage of OptGNN's ability to capture convex relaxations to design an algorithm for producing dual certificates of optimality (bounds on the optimal solution) from the learned embeddings of OptGNN. | Machine Learning (cs.LG) | Figure 1 pg 2. is inaccurate | factual/methodological/other critical errors in manuscript | 13,627 |
2310.01248v2 | Improving Emotional Expression and Cohesion in Image-Based Playlist Description and Music Topics: A Continuous Parameterization Approach | Text generation in image-based platforms, particularly for music-related content, requires precise control over text styles and the incorporation of emotional expression. However, existing approaches often struggle to control the proportion of external factors in generated text and rely on discrete inputs, lacking continuous control conditions for desired text generation. This study proposes Continuous Parameterization for Controlled Text Generation (CPCTG) to overcome these limitations. Our approach leverages a Language Model (LM) as a style learner, integrating Semantic Cohesion (SC) and Emotional Expression Proportion (EEP) considerations. By enhancing the reward method and manipulating the CPCTG level, our experiments on playlist description and music topic generation tasks demonstrate significant improvements in ROUGE scores, indicating enhanced relevance and coherence in the generated text. | Computation and Language (cs.CL) | Because I find some important formulations need to change | factual/methodological/other critical errors in manuscript | 13,628
2310.01481v2 | Nonlinear acoustics and shock dynamics in isentropic atmospheres | Nonlinear acoustic evolution is often discussed in the context of wave-steepening that leads to shock formation, and is of special interest in applications where the shock continues to strengthen due to a narrowing of its channel or the stratification of the medium. Accurate scalings govern low amplitude waves and strong shocks, but connecting these phases, or describing waves that are nonlinear from the outset, generally requires simulation. We address this problem using the fact that waves within a plane-parallel, isentropic and gravitationally stratified atmosphere are described by exact simple-wave solutions, thanks to the conservation of Riemann invariants in a freely falling reference frame. Our solutions enable us to discriminate waves that reflect from those that form shocks, and to capture wave and shock evolution using an ordinary differential equation. For several relevant values of the adiabatic index $\gamma$ the solutions are explicit; furthermore, nonlinear wave reflection from a free surface can be described analytically for $\gamma=3$. Comparison to hydrodynamic simulations shows that our analytic shock approximation is accurate up to moderate ($\sim$ few--15) Mach numbers, where the accuracy increases with the adiabatic index. Our solutions also imply that an initially subsonic pulse is unable to unbind mass from the atmosphere without significantly increasing its entropy. | Fluid Dynamics (physics.flu-dyn) | Paper withdrawn due to a conceptual error regarding backward travelling perturbations (section 3). We are working on an update | factual/methodological/other critical errors in manuscript | 13,629 |
2310.01638v2 | Remark on the Global Wellposedness of the Periodic Mass Critical NLS | The traditional argument for global well-posedness of the mass critical NLS on $\mathbb{T}$ and $\mathbb{T}^2$ follows from bilinear estimates that, in a sense, match those used on $\mathbb{R}^d$. Such bilinear estimates generically follow for small time, or scaled tori for time $T=1$, but a deeper fact holds: the full gamut of Strichartz estimates used on $\mathbb{R}^d$ hold on $\mathbb{T}^d$ for time-scales that are much longer than the one that the traditional $I$-method utilizes. We utilize this fact, together with scale invariant norms, to improve the $I$-method, yielding global well-posedness for small mass initial data when $s>1/4$ on $\mathbb{T}$. This improves results of De Silva, Pavlović, Staffilani, and Tzirakis, \cite{de2007global}, and Li, Wu, and Xu, \cite{li2011global}. | Analysis of PDEs (math.AP) | Flaw in proof | factual/methodological/other critical errors in manuscript | 13,631
2310.02849v2 | Understanding the Experiences of Neurodivergent Undergraduate Physicists Through Critical Disability Physics Identity | As more neurodivergent students enter college, discussion on neurodiversity in higher education is gaining momentum. It is increasingly imperative that we understand how neurodivergent individuals construct their identities within the context of physics programs. Using the Critical Disability Physics Identity framework, this study offers a nuanced analysis of the lived experiences and identity development of neurodivergent undergraduate physicists through the lens of resource use and political agency. By examining the phenomenon of neurodivergent undergraduate physics identity, we uncover the multifaceted dimensions of their lived experiences, including the marginalization caused by a neurotypical-normative culture, the negotiation of conflicting definitions of being a physicist, and the influence of supportive networks. Our findings shed light on the intersectionality of neurodivergent identity and physics identity, offering valuable insights to educators, researchers, and institutions committed to fostering inclusive learning environments. Ultimately, this research contributes to the ongoing dialogue on diversity, equity, inclusion, and justice within physics and higher education as a whole. | Physics Education (physics.ed-ph) | Missing third author. Needs massive revisions and restructuring. Error in code connection dataset in results section, should be undergraduates only. Replacement version will not be available in a short amount of time | factual/methodological/other critical errors in manuscript | 13,633
2310.04217v2 | A Stochastic Game without Approximate Equilibria | A game has approximate equilibria if for every $\epsilon >0$ there is an $\epsilon$-equilibrium. We show that there is a stochastic game that lacks approximate equilibria. This game has finitely many players and actions, their payoffs are Borel measurable functions on the pathways of play, and all players have perfect knowledge of the past histories and the present state. | Functional Analysis (math.FA) | The proof is flawed | factual/methodological/other critical errors in manuscript | 13,640 |
2310.04385v2 | On the Torsion Congruence for Zeta Functions of Totally Real Fields | In this note, we study the special values for zeta functions of totally real fields using the Shintani's cone decomposition. We prove certain congruence between the special values for zeta functions under the prime degree field extension. This congruence implies the `torsion congruence' proved by Ritter-Weiss which is crucial in the proof of the noncommutative Iwasawa main conjecture for totally real fields. | Number Theory (math.NT) | There is a gap in the paper due to a mistake in Proposition 2.1. The elements chosen by Colmez satisfying the [REDACTED-NAME] do not generalize the whole unit groups. We are now trying to resolve this problem by considering the sign fundamental domain as in Charollois-Dasgupta-Greenberg ([REDACTED-NAME] Cocycles on GLn II: Shintani's method) | factual/methodological/other critical errors in manuscript | 13,641 |
2310.04706v3 | Offline Imitation Learning with Variational Counterfactual Reasoning | In offline Imitation Learning (IL), an agent aims to learn an optimal expert behavior policy without additional online environment interactions. However, in many real-world scenarios, such as robotics manipulation, the offline dataset is collected from suboptimal behaviors without rewards. Due to the scarce expert data, the agents usually suffer from simply memorizing poor trajectories and are vulnerable to the variations in the environments, lacking the capability of generalizing to new environments. To effectively remove spurious features that would otherwise bias the agent and hinder generalization, we propose a framework named \underline{O}ffline \underline{I}mitation \underline{L}earning with \underline{C}ounterfactual data \underline{A}ugmentation (OILCA). In particular, we leverage the identifiable variational autoencoder to generate \textit{counterfactual} samples. We theoretically analyze the counterfactual identification and the improvement of generalization. Moreover, we conduct extensive experiments to demonstrate that our approach significantly outperforms various baselines on both \textsc{DeepMind Control Suite} benchmark for in-distribution robustness and \textsc{CausalWorld} benchmark for out-of-distribution generalization. | Machine Learning (cs.LG) | This version has some errors and mistakes, especially theoretically. After a careful assessment, we think we need a long time to fix them, including revising the theory derivation in Sec. 3.2 and 4.3. and conducting some supplementary experiments to help validate the applicability of our method | factual/methodological/other critical errors in manuscript | 13,642 |
2310.05113v2 | Nonlinear Orbital and Spin Edelstein Effect in Centrosymmetric Metals | Nonlinear spintronics has attracted significant attention as it combines nonlinear dynamics with spintronics, opening up new possibilities beyond linear responses. A recent theoretical work [Cong et al., Phys. Rev. Lett. 130, 166302 (2023)] predicts the nonlinear generation of spin density [nonlinear spin Edelstein effect (NSEE)] in centrosymmetric metals based on symmetry analysis combined with first-principles calculations. However, its microscopic mechanism remains unclear. This paper focuses on the fundamental role of orbital degrees of freedom for the nonlinear generation in centrosymmetric systems. Using a combination of tight-binding model and density functional theory calculations, we demonstrate that nonlinear orbital density can arise independently of spin-orbit coupling. In contrast, spin density follows through spin-orbit coupling. We further elucidate the microscopic mechanism responsible for this phenomenon, which involves the NSEE induced by electric-field-induced orbital Rashba texture. We also find that the NSEE is significantly influenced by the energy level broadening and that most materials previously predicted to exhibit strong NSEE exhibit negligible NSEE for the energy level broadening of the order of room temperature but demonstrate substantial NSEE only when the energy level broadening is reduced. In addition, we also explore the potential applications of the nonlinear orbital and spin Edelstein effect for field-free switching. | Materials Science (cond-mat.mtrl-sci) | We have identified significant numerical errors in our calculations. We will make the necessary modifications and resubmit the work shortly | factual/methodological/other critical errors in manuscript | 13,646
2310.05991v2 | Enhancing Document-level Event Argument Extraction with Contextual Clues and Role Relevance | Document-level event argument extraction poses new challenges of long input and cross-sentence inference compared to its sentence-level counterpart. However, most prior works focus on capturing the relations between candidate arguments and the event trigger in each event, ignoring two crucial points: a) non-argument contextual clue information; b) the relevance among argument roles. In this paper, we propose a SCPRG (Span-trigger-based Contextual Pooling and latent Role Guidance) model, which contains two novel and effective modules for the above problem. The Span-Trigger-based Contextual Pooling (STCP) adaptively selects and aggregates the information of non-argument clue words based on the context attention weights of specific argument-trigger pairs from the pre-trained model. The Role-based Latent Information Guidance (RLIG) module constructs latent role representations, makes them interact through role-interactive encoding to capture semantic relevance, and merges them into candidate arguments. Both STCP and RLIG introduce no more than 1% new parameters compared with the base model and can be easily applied to other event extraction models, which are compact and transplantable. Experiments on two public datasets show that our SCPRG outperforms previous state-of-the-art methods, with 1.13 F1 and 2.64 F1 improvements on RAMS and WikiEvents respectively. Further analyses illustrate the interpretability of our model. | Computation and Language (cs.CL) | There are some mistakes in the paper | factual/methodological/other critical errors in manuscript | 13,650
2310.09453v2 | Effects of Same-Race Mentorship Preferences on Academic Performance and Survival | Same-race mentorship preference refers to mentors or mentees forming connections significantly influenced by a shared race. Although racial diversity in science has been well-studied and linked to favorable outcomes, the extent and effects of same-race mentorship preferences remain largely underexplored. Here, we analyze 465,355 mentor-mentee pairs from more than 60 research areas over the last 70 years to investigate the effect of same-race mentorship preferences on mentees' academic performance and survival. We use causal inference and statistical matching to measure same-race mentorship preferences while accounting for racial demographic variations across institutions, time periods, and research fields. Our findings reveal a pervasive same-race mentorship propensity across races, fields, and universities of varying research intensity. We observe an increase in same-race mentorship propensity over the years, further reinforced inter-generationally within a mentorship lineage. This propensity is more pronounced for minorities (Asians, Blacks, and Hispanics). Our results reveal that mentees under the supervision of mentors with high same-race propensity experience significantly lower productivity, impact, and collaboration reach during and after training, ultimately leading to a 27.6% reduced likelihood of remaining in academia. In contrast, a mentorship approach devoid of racial propensity appears to offer the best prospects for academic performance and persistence. These findings underscore the importance of mentorship diversity for academic success and shed light on factors contributing to minority underrepresentation in science. | Social and Information Networks (cs.SI) | 1. After further evaluating the race prediction method, we observed unsatisfactory accuracy and F1 scores. The study's findings could be impacted by these subpar predictions. 2. Our study incorporates both US and non-US samples, revealing that non-US samples may introduce outliers and distort the results. We recognize that the study's findings and conclusions might be affected by data quality | factual/methodological/other critical errors in manuscript | 13,659 |
2310.09541v2 | Poissonian pair correlation for higher dimensional real sequences | In this article, we examine the Poissonian pair correlation (PPC) statistic for higher-dimensional real sequences. Specifically, we demonstrate that for $d\geq 3$, for almost all $(\alpha_1,\ldots,\alpha_d) \in \mathbb{R}^d$, the sequence $\big(\{x_n\alpha_1\},\dots,\{x_n\alpha_d\}\big)$ in $[0,1)^d$ has PPC conditionally on the additive energy bound of $(x_n).$ This bound is more relaxed compared to the additive energy bound for one dimension as discussed in [1]. We also establish the metric PPC for $(n^{\theta_1},\ldots,n^{\theta_d})$ provided that the $\theta_i$'s are greater than one. More generally, we derive the PPC for $\big(\{x_n^{(1)}\alpha_1\},\dots,\{x_n^{(d)}\alpha_d\}\big) \in [0,1)^d$ for almost all $(\alpha_1,\ldots,\alpha_d) \in \mathbb{R}^d.$ | Number Theory (math.NT) | In the proof of Theorem 2.1, $f''(n,\boldsymbol{x})\asymp\frac{|x_1|+\cdots+|x_d|}{N^2}$ is used. This is an upper bound, not a lower bound, because the $x_i$'s can take negative values. Therefore, there is a gap in the proof | factual/methodological/other critical errors in manuscript | 13,660
2310.10698v2 | Bridging Code Semantic and LLMs: Semantic Chain-of-Thought Prompting for Code Generation | Large language models (LLMs) have showcased remarkable prowess in code generation. However, automated code generation is still challenging since it requires a high-level semantic mapping between natural language requirements and code. Most existing LLM-based approaches for code generation rely on decoder-only causal language models that often treat code merely as plain text tokens, i.e., feeding the requirements as a prompt input and outputting code as a flat sequence of tokens, potentially missing the rich semantic features inherent in source code. To bridge this gap, this paper proposes the "Semantic Chain-of-Thought" approach to introduce the semantic information of code, named SeCoT. Our motivation is that the semantic information of the source code (e.g., data flow and control flow) describes more precisely the program execution behavior, intention and function. By guiding the LLM to consider and integrate semantic information, we can achieve a more granular understanding and representation of code, enhancing code generation accuracy. Meanwhile, while traditional techniques leveraging such semantic information require complex static or dynamic code analysis to obtain features such as data flow and control flow, SeCoT demonstrates that this process can be fully automated via the intrinsic capabilities of LLMs (i.e., in-context learning), while being generalizable and applicable to challenging domains. While SeCoT can be applied with different LLMs, this paper focuses on the powerful GPT-style models: ChatGPT (closed-source model) and WizardCoder (open-source model). The experimental study on three popular code generation benchmarks (i.e., HumanEval, HumanEval-ET and MBPP) shows that SeCoT achieves state-of-the-art performance, greatly improving the potential of large models for code generation. | Computation and Language (cs.CL) | There may be calculation errors in Table 4 of the paper. We need time to verify and supplement, so the manuscript needs to be withdrawn. Thanks! | factual/methodological/other critical errors in manuscript | 13,664
2310.11730v2 | Federated Heterogeneous Graph Neural Network for Privacy-preserving Recommendation | Heterogeneous information network (HIN), which contains rich semantics depicted by meta-paths, has become a powerful tool to alleviate data sparsity in recommender systems. Existing HIN-based recommendations hold the data centralized storage assumption and conduct centralized model training. However, the real-world data is often stored in a distributed manner for privacy concerns, resulting in the failure of centralized HIN-based recommendations. In this paper, we suggest the HIN is partitioned into private HINs stored in the client side and shared HINs in the server. Following this setting, we propose a federated heterogeneous graph neural network (FedHGNN) based framework, which can collaboratively train a recommendation model on distributed HINs without leaking user privacy. Specifically, we first formalize the privacy definition in the light of differential privacy for HIN-based federated recommendation, which aims to protect user-item interactions of private HIN as well as user's high-order patterns from shared HINs. To recover the broken meta-path based semantics caused by distributed data storage and satisfy the proposed privacy, we elaborately design a semantic-preserving user interactions publishing method, which locally perturbs user's high-order patterns as well as related user-item interactions for publishing. After that, we propose a HGNN model for recommendation, which conducts node- and semantic-level aggregations to capture recovered semantics. Extensive experiments on three datasets demonstrate our model outperforms existing methods by a large margin (up to 34% in HR@10 and 42% in NDCG@10) under an acceptable privacy budget. | Machine Learning (cs.LG) | some mistakes | factual/methodological/other critical errors in manuscript | 13,667 |
2310.11874v2 | Some derivations among Logarithmic Space Bounded Counting Classes | In this paper we show derivations among logarithmic space bounded counting classes based on closure properties of $\#L$ that lead us to the result that $NL=PL=C_=L$. | Computational Complexity (cs.CC) | Expert comments reveal an error in the main result claimed by the authors | factual/methodological/other critical errors in manuscript | 13,668
2310.12327v2 | Representation type of cyclotomic quiver Hecke algebras of type $C^{(1)}_{\ell}$ | We first investigate a connected quiver consisting of all dominant maximal weights for an integrable highest weight module in affine type C. This quiver provides an efficient method to obtain all dominant maximal weights. Then, we completely determine the representation type of cyclotomic Khovanov-Lauda-Rouquier algebras of arbitrary level in affine type C, by using the quiver we construct. We also determine the Morita equivalence classes and graded decomposition matrices of certain representation-finite and tame cyclotomic KLR algebras. | Representation Theory (math.RT) | Some mistakes appear in the last section and some overlap of texts appear in the abstract and the first two sections | factual/methodological/other critical errors in manuscript | 13,669 |
2310.12703v2 | Totally real cubic numbers are well approximable | In this paper we prove that all irrational numbers from totally real cubic number fields are well approximable by rationals (i.e. the partial quotients in the continued fraction expansion of such a number are unbounded). This settles the long standing open question of whether or not well approximable algebraic numbers exist. Our proof uses a number theoretic classification of approximations to algebraic numbers, together with a result of Lindenstrauss and Weiss which is an application of Ratner's orbit closure theorem. | Number Theory (math.NT) | [REDACTED-NAME] pointed out an error in the last displayed equation: the claim that it is a subset of A_uR_T is not true. u_1 can fluctuate by a multiplicative factor of up to kappa. This forces the last part of the argument to sample a geometric subsequence of the unipotent flow. We acknowledge this is a non-trivial gap, and withdraw our claim of having solved this problem | factual/methodological/other critical errors in manuscript | 13,671 |
2310.12876v2 | Statistical Process Monitoring of Isolated and Persistent Defects in Complex Geometrical Shapes | Traditional Statistical Process Control methodologies face several challenges when monitoring defects in complex geometries, such as those of products obtained via Additive Manufacturing techniques. Many approaches cannot be applied in these settings due to the high dimensionality of the data and the lack of parametric and distributional assumptions on the object shapes. Motivated by a case study involving the monitoring of egg-shaped trabecular structures, we investigate two recently-proposed methodologies to detect deviations from the nominal IC model caused by excess or lack of material. Our study focuses on the detection of both isolated large changes in the geometric structure, as well as persistent small deviations. We compare the approach of Scimone et al. (2022) with Zhao and del Castillo (2021) for monitoring defects in a small Phase I sample of 3D-printed objects. While the former control chart is able to detect large defects, the latter allows the detection of nonconforming objects with persistent small defects. Furthermore, we address the fundamental issue of selecting the number of eigenvalues to be monitored in Zhao and del Castillo's method by proposing a dimensionality reduction technique based on kernel principal components. This approach is shown to provide a good detection capability even when considering a large number of eigenvalues. By leveraging the sensitivity of the two monitoring schemes to different magnitudes of nonconformities, we also propose a novel joint monitoring scheme that is capable of identifying both types of defects in the considered case study. Computer code in R and Matlab that implements these methods and replicates the results is available as part of the supplementary material. | Applications (stat.AP) | We identified some issues in applying the LB method to the data in the study, which impacted the accuracy and reliability of the results. We sincerely thank [REDACTED-NAME] del Castillo and [REDACTED-NAME] for their valuable comments and insights, which helped identifying these issues. We appreciate their contributions and are currently in the process of revising and improving the manuscript | factual/methodological/other critical errors in manuscript | 13,674 |
2310.13959v2 | Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment | Unsupervised domain adaptation aims to transfer rich knowledge from the annotated source domain to the unlabeled target domain with the same label space. One prevalent solution is the bi-discriminator domain adversarial network, which strives to identify target domain samples outside the support of the source domain distribution and enforces their classification to be consistent on both discriminators. Despite being effective, agnostic accuracy and overconfident estimation for out-of-distribution samples hinder its further performance improvement. To address the above challenges, we propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment, i.e., BACG. BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions. Specifically, for accuracy-awareness, we first design an optimizable nearest neighbor algorithm to obtain pseudo-labels of samples in the target domain, and then enforce the backward gradient approximation of the two discriminators at the class level. Furthermore, following evidential learning theory, we transform the traditional softmax-based optimization method into a Multinomial Dirichlet hierarchical model to infer the class probability distribution as well as sample uncertainty, thereby alleviating misestimation of out-of-distribution samples and guaranteeing high-quality class alignment. In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e., Fast-BACG, which can greatly shorten the training process at the cost of a minor decrease in accuracy. Extensive experiments and detailed theoretical analysis on four benchmark data sets validate the effectiveness and robustness of our algorithm. | Computer Vision and Pattern Recognition (cs.CV) | [REDACTED-NAME] citation. For instance, Figure 8 hasn't been cited | factual/methodological/other critical errors in manuscript | 13,678
2310.16179v2 | Nonlinear theory remedies the lack of invertibility in time periodic fluid flows | This paper provides a framework for strong time periodic solutions of quasilinear evolution equations. The novelty of this approach is that zero is allowed to be a spectral value of the underlying linearized operator. This approach is then applied to time periodic problems associated with the Navier-Stokes equations, generalized Newtonian fluids, and quasilinear reaction-diffusion systems, as well as to magneto-hydrodynamics. | Analysis of PDEs (math.AP) | Proof of abstract result to be fixed | factual/methodological/other critical errors in manuscript | 13,682
2310.17306v2 | FormaT5: Abstention and Examples for Conditional Table Formatting with Natural Language | Formatting is an important property in tables for visualization, presentation, and analysis. Spreadsheet software allows users to automatically format their tables by writing data-dependent conditional formatting (CF) rules. Writing such rules is often challenging for users as it requires them to understand and implement the underlying logic. We present FormaT5, a transformer-based model that can generate a CF rule given the target table and a natural language description of the desired formatting logic. We find that user descriptions for these tasks are often under-specified or ambiguous, making it harder for code generation systems to accurately learn the desired rule in a single step. To tackle this problem of under-specification and minimise argument errors, FormaT5 learns to predict placeholders through an abstention objective. These placeholders can then be filled by a second model or, when examples of rows that should be formatted are available, by a programming-by-example system. To evaluate FormaT5 on diverse and real scenarios, we create an extensive benchmark of 1053 CF tasks, containing real-world descriptions collected from four different sources. We release our benchmarks to encourage research in this area. Abstention and filling allow FormaT5 to outperform 8 different neural approaches on our benchmarks, both with and without examples. Our results illustrate the value of building domain-specific learning systems. | Artificial Intelligence (cs.AI) | Contains inappropriately sourced conjecture of OpenAI's ChatGPT parameter count from this http URL, a citation which was omitted. The authors do not have direct knowledge or verification of this information, and relied solely on this article, which may lead to public confusion | factual/methodological/other critical errors in manuscript | 13,686
2310.17680v2 | CodeFusion: A Pre-trained Diffusion Model for Code Generation | Imagine a developer who can only change their last line of code, how often would they have to start writing a function from scratch before it is correct? Auto-regressive models for code generation from natural language have a similar limitation: they do not easily allow reconsidering earlier tokens generated. We introduce CodeFusion, a pre-trained diffusion code generation model that addresses this limitation by iteratively denoising a complete program conditioned on the encoded natural language. We evaluate CodeFusion on the task of natural language to code generation for Bash, Python, and Microsoft Excel conditional formatting (CF) rules. Experiments show that CodeFusion (75M parameters) performs on par with state-of-the-art auto-regressive systems (350M-175B parameters) in top-1 accuracy and outperforms them in top-3 and top-5 accuracy due to its better balance in diversity versus quality. | Software Engineering (cs.SE) | Contains inappropriately sourced conjecture of OpenAI's ChatGPT parameter count from this http URL, a citation which was omitted. The authors do not have direct knowledge or verification of this information, and relied solely on this article, which may lead to public confusion | factual/methodological/other critical errors in manuscript | 13,687 |
2310.18095v2 | A Comparative Study between the Crystalline Structures of Rock Crystal and Citrine Crystal Using the X-ray Diffraction Technique | In this work, the X-ray diffraction technique is used to investigate and highlight the similarities and differences in the contents and crystal structure of rock crystal and citrine crystal. Both samples were prepared by mechanical milling for 30 seconds until each one became a homogeneous powder. The experimental X-ray patterns of rock crystal and citrine crystal were smoothed and then analyzed using a standard pattern. From this pattern, the unit cell parameters, space group, and atomic positional coordinates were determined. It was noted that the rock crystal and the citrine crystal have the same chemical formula; they belong to the quartz family and consist of silicon dioxide (SiO2), with the same crystal structure (hexagonal), the same space group (P3121), and the same number of atoms in the unit cell (Z=3), while the space group numbers are 152 in the rock crystal and 154 in the citrine crystal. A very slight difference was found in the lattice parameters, the atomic density, and the volume of the unit cell between the two samples. This difference is due to the presence of trace impurities in the crystal lattice of citrine, such as anorthite (CaAl2Si2O8) and muscovite (KAl2(AlSi3O10)(OH)2). This study provides valuable insights into the crystallographic properties of these two quartz varieties and contributes to our understanding of their formation and properties. | Geophysics (physics.geo-ph) | There are some errors | factual/methodological/other critical errors in manuscript | 13,690
2310.18331v2 | AllTogether: Investigating the Efficacy of Spliced Prompt for Web Navigation using Large Language Models | Large Language Models (LLMs) have emerged as promising agents for web navigation tasks, interpreting objectives and interacting with web pages. However, the efficiency of spliced prompts for such tasks remains underexplored. We introduce AllTogether, a standardized prompt template that enhances task context representation, thereby improving LLMs' performance in HTML-based web navigation. We evaluate the efficacy of this approach through prompt learning and instruction finetuning based on open-source Llama-2 and API-accessible GPT models. Our results reveal that models like GPT-4 outperform smaller models in web navigation tasks. Additionally, we find that the length of the HTML snippet and the history trajectory significantly influence performance, and prior step-by-step instructions prove less effective than real-time environmental feedback. Overall, we believe our work provides valuable insights for future research in LLM-driven web agents. | Computation and Language (cs.CL) | Included wrong information in the comment. Should be 7 pages and not published yet | factual/methodological/other critical errors in manuscript | 13,693
2310.18451v2 | Fusion of the Power from Citations: Enhance your Influence by Integrating Information from References | Influence prediction plays a crucial role in the academic community. The amount of scholars' influence determines whether their work will be accepted by others. Most existing research focuses on predicting one paper's citation count after a period or identifying the most influential papers among the massive candidates, without concentrating on an individual paper's negative or positive impact on its authors. Thus, this study aims to formulate the prediction problem to identify whether one paper can increase scholars' influence or not, which can provide feedback to the authors before they publish their papers. First, we present the self-adapted ACC (Average Annual Citation Counts) metric to measure authors' impact yearly based on their annual published papers, paper citation counts, and contributions in each paper. Then, we propose the RD-GAT (Reference-Depth Graph Attention Network) model to integrate heterogeneous graph information from different depths of references by assigning attention coefficients to them. Experiments on the AMiner dataset demonstrate that the proposed ACC metric can represent authors' influence effectively, and that the RD-GAT model is more efficient on the academic citation network and has stronger robustness against the overfitting problem compared with the baseline models. By applying the framework in this work, scholars can identify whether their papers can improve their influence in the future. | Computers and Society (cs.CY) | There is a problem in section 3 | factual/methodological/other critical errors in manuscript | 13,694
2310.18848v2 | Optimal concentration level of anisotropic Trudinger-Moser functionals on any bounded domain | Let $F$ be convex and homogeneous of degree $1$, its polar $F^{o}$ represent a Finsler metric on $\mathbb{R}^{n}$, and $\Omega$ be any bounded open set in $\mathbb{R}^{n}$. In this paper, we first construct the theoretical structure of anisotropic harmonic transplantation. Using the anisotropic harmonic transplantation, the co-area formula, the limiting Sobolev approximation method, and a delicate estimate of the level set of the Green function, we investigate the optimal concentration level of the Trudinger-Moser functional \[ \int_{\Omega}e^{\lambda_{n}|u|^{\frac{n}{n-1}}}dx \] under the anisotropic Dirichlet norm constraint $\int_{\Omega}F^{n}\left( \nabla{u}\right) dx\leq1$, where $\lambda_{n}=n^{\frac{n}{n-1}}\kappa_{n}^{\frac{1}{n-1}}$ denotes the sharp constant of the anisotropic Trudinger-Moser inequality on bounded domains and $\kappa_{n}$ is the Lebesgue measure of the unit Wulff ball. As an application, we can immediately deduce the existence of extremals for the anisotropic Trudinger-Moser inequality on bounded domains. Finally, we also consider the optimal concentration level of the anisotropic singular Trudinger-Moser functional. The method is based on the limiting Hardy-Sobolev approximation method and on constructing a suitable normalized anisotropic concentrating sequence. | Analysis of PDEs (math.AP) | When using the polynomial approximation functional to prove the optimal concentration upper bound of the Trudinger-Moser inequality, we cannot show that the order of limits can be exchanged | factual/methodological/other critical errors in manuscript | 13,696
2310.19661v2 | Classification of the anyon sectors of Kitaev's quantum double model | We give a complete classification of the anyon sectors of Kitaev's quantum double model on the infinite triangular lattice and for finite gauge group $G$, including the non-abelian case. As conjectured, the anyon sectors of the model correspond precisely to the irreducible representations of the quantum double algebra of $G$. Our proof consists of two main parts. In the first part, we construct for each irreducible representation of the quantum double algebra a pure state and show that the GNS representations of these pure states are pairwise disjoint anyon sectors. In the second part we show that any anyon sector is unitarily equivalent to one of the anyon sectors constructed in the first part. The first part of the proof crucially uses a description of the states in question as string-net condensates. Purity is shown by characterising these states as the unique states that satisfy appropriate sets of local constraints. At the core of the proof is the fact that certain groups of local gauge transformations act freely and transitively on collections of local string-nets. For the second part, we show that any anyon sector contains a pure state that satisfies all but a finite number of these constraints. Using known techniques we can then construct a pure state in the anyon sector that satisfies all but one of these constraints. Finally, we show explicitly that any such state must be a vector state in one of the anyon sectors constructed in the first part. | Mathematical Physics (math-ph) | This paper has been withdrawn by the authors due to errors in Sec. 5 and the proof of Prop. 6.6, which states that the irreps $π_{RC}$ constructed in the paper are anyon sectors. The other results of the paper are not affected by this error. Version 1 of the manuscript contains a correct proof of the fact that any anyon sector of the quantum double model is equivalent to one of the $π_{RC}$ | factual/methodological/other critical errors in manuscript | 13,700 |
2310.19673v3 | Design and Analysis of a Novel Radial Deployment Mechanism of Payloads in Sounding Rockets | This research paper introduces an innovative payload deployment mechanism tailored for sounding rockets, addressing a crucial challenge in the field. The problem statement revolves around the need to efficiently and compactly deploy multiple payloads during a single rocket launch. This mechanism, designed to be exceptionally suitable for sounding rockets, features a cylindrical carrier structure equipped with multiple independently operable deployment ports. Powered by a motor, the carrier structure rotates to enable radial ejection of payloads. In this paper, we present the mechanism's design and conduct a comprehensive performance analysis. This analysis encompasses an examination of structural stability, system dynamics, motor torque, and power requirements. Additionally, we develop a simulation model to assess payload deployment behavior under various conditions. Our findings demonstrate the viability and efficiency of this proposed mechanism for deploying multiple payloads within a single sounding rocket launch. Its adaptability to accommodate diverse payload types and sizes enhances its versatility. Moreover, the mechanism's radial deployment capability allows payloads to be released at different altitudes, thereby offering greater flexibility for scientific experiments. In summary, this innovative payload radial deployment mechanism represents a significant advancement in sounding rocket technology and holds promise for a wide array of applications in both scientific and commercial missions. | Systems and Control (eess.SY) | The results in this paper have to be verified again and hence a new paper has to be written | factual/methodological/other critical errors in manuscript | 13,701 |
2310.19967v3 | Early detection of inflammatory arthritis to improve referrals using multimodal machine learning from blood testing, semi-structured and unstructured patient records | Early detection of inflammatory arthritis (IA) is critical to efficient and accurate hospital referral triage for timely treatment and preventing the deterioration of the IA disease course, especially under limited healthcare resources. The manual assessment process is the most common approach in practice for the early detection of IA, but it is extremely labor-intensive and inefficient. A large amount of clinical information needs to be assessed for every referral from General Practice (GP) to the hospitals. Machine learning shows great potential in automating repetitive assessment tasks and providing decision support for the early detection of IA. However, most machine learning-based methods for IA detection rely on blood testing results. But in practice, blood testing data is not always available at the point of referrals, so we need methods to leverage multimodal data such as semi-structured and unstructured data for early detection of IA. In this research, we present fusion and ensemble learning-based methods using multimodal data to assist decision-making in the early detection of IA, and a conformal prediction-based method to quantify the uncertainty of the prediction and detect any unreliable predictions. To the best of our knowledge, our study is the first attempt to utilize multimodal data to support the early detection of IA from GP referrals. | Machine Learning (cs.LG) | We found some issues in data preprocessing, which will impact the final result. Therefore we would like to withdraw the paper | factual/methodological/other critical errors in manuscript | 13,702 |
2311.00387v2 | Mapping electrostatic potential in electrolyte solution | Mapping the electrostatic potential (ESP) distribution around ions in electrolyte solution is crucial for the establishment of a microscopic understanding of electrolyte solution properties. For solutions in the bulk phase, it has not been possible to measure the ESP distribution on the Angstrom scale. Here we show that a liquid electron scattering experiment using a state-of-the-art relativistic electron beam can be used to measure the Debye screening length of aqueous LiCl, KCl, and KI solutions across a wide range of concentrations. We observe that the Debye screening length is long-ranged at low concentration and short-ranged at high concentration, providing key insight into the decades-long debate over whether the impact of ions in water is long-ranged or short-ranged. In addition, we show that the measured ESP can be used to retrieve the non-local dielectric function of the electrolyte solution, which can serve as a promising route to investigate the electrostatic origin of special ion effects. Our observations show that interaction, as one of the two fundamental perspectives for understanding electrolyte solutions, can provide much richer information than structure. | Soft Condensed Matter (cond-mat.soft) | The small-angle signal in Fig. 2 C-H is highly likely to be an experimental artifact, due to the electron beam being placed too close to the edge of the liquid sheet. This artifact invalidates the main conclusion of the paper | factual/methodological/other critical errors in manuscript | 13,705
2311.02297v2 | Candidate Members of the VMP/EMP Disk System of the Galaxy from the SkyMapper and SAGES Surveys | Photometric stellar surveys now cover a large fraction of the sky, probe to fainter magnitudes than large-scale spectroscopic studies, and are relatively free from the target-selection biases often associated with such studies. Photometric-metallicity estimates that include narrow/medium-band filters can achieve comparable accuracy and precision to existing low- and medium-resolution spectroscopic surveys such as SDSS/SEGUE and LAMOST, with metallicities as low as [Fe/H] $\sim -3.5$ to $-4.0$. Here we report on an effort to identify likely members of the Galactic disk system among the Very Metal-Poor (VMP; [Fe/H] $\leq$ -2) and Extremely Metal-Poor (EMP; [Fe/H] $\leq$ -3) stars. Our analysis is based on a sample of some 11.5 million stars with full space motions selected from the SkyMapper Southern Survey (SMSS) and Stellar Abundance and Galactic Evolution Survey (SAGES). After applying a number of quality cuts, designed to obtain the best available metallicity and dynamical estimates, we analyze a total of about 7.74 million stars in the combined SMSS/SAGES sample. We employ two techniques which, depending on the method, identify between 5,878 and 7,600 VMP stars (19% to 25% of all VMP stars) and between 345 and 399 EMP stars (35% to 40% of all EMP stars) that appear to be members of the Galactic disk system on highly prograde orbits (v$_{\phi} > 150$ kms$^{-1}$), the majority of which have low orbital eccentricities (ecc $\le 0.4$). The large fractions of VMP/EMP stars that are associated with the MW disk system strongly suggests the presence of an early forming ``primordial" disk. | Astrophysics of Galaxies (astro-ph.GA) | The sample of VMP/EMP disk stars suffered from a large amount of contamination due to the presence of unrecognized main-sequence binaries, and were mis-classified as dwarfs. The actual number of likely VMP/EMP disk stars is substantially smaller than reported here, and current results should not be used. We are redoing the analysis with a more stringent set of selection criteria | factual/methodological/other critical errors in manuscript | 13,709 |
2311.03808v2 | An operad structure on the free commutative monoid over a positive operad | We give an explicit description of an operad structure on the free commutative monoid E o q generated by a given positive operad q. This construction, new to our knowledge, does not seem to be reachable through a distributive law. | Combinatorics (math.CO) | Withdrawn because of a major error: the object thus obtained is interesting but is NOT an operad. The error is in the proof of nested associativity (A2), top of Page 10, passage to the last line of the computation. Another preprint about this object is planned to come soon | factual/methodological/other critical errors in manuscript | 13,716
2311.04125v2 | New bounds in the Bogolyubov-Ruzsa lemma | We establish new bounds in the Bogolyubov-Ruzsa lemma, demonstrating that if A is a subset of a finite abelian group with density alpha, then 3A-3A contains a Bohr set of rank O(log^2 (2/alpha)) and radius Omega(log^{-2} (2/alpha)). The Bogolyubov-Ruzsa lemma is one of the deepest results in additive combinatorics, with a plethora of important consequences. In particular, we obtain new results toward the Polynomial Freiman-Ruzsa conjecture and improved bounds in Freiman's theorem. | Combinatorics (math.CO) | Mistake in Lemma 5.1 | factual/methodological/other critical errors in manuscript | 13,717 |
2311.04940v2 | Interpretable Geoscience Artificial Intelligence (XGeoS-AI): Application to Demystify Image Recognition | As Earth science enters the era of big data, artificial intelligence (AI) not only offers great potential for solving geoscience problems, but also plays a critical role in accelerating the understanding of the complex, interactive, and multiscale processes of Earth's behavior. As geoscience AI models are progressively utilized for significant predictions in crucial situations, geoscience researchers are increasingly demanding their interpretability and versatility. This study proposes an interpretable geoscience artificial intelligence (XGeoS-AI) framework to unravel the mystery of image recognition in the Earth sciences, and its effectiveness and versatility are demonstrated by taking computed tomography (CT) image recognition as an example. Inspired by the mechanism of human vision, the proposed XGeoS-AI framework generates a threshold value from a local region within the whole image to complete the recognition. Different kinds of artificial intelligence (AI) methods, such as Support Vector Regression (SVR), Multilayer Perceptron (MLP), and Convolutional Neural Network (CNN), can be adopted as the AI engines of the proposed XGeoS-AI framework to efficiently complete geoscience image recognition tasks. Experimental results demonstrate that the effectiveness, versatility, and heuristics of the proposed framework have great potential in solving geoscience image recognition problems. Interpretable AI should receive more and more attention in the field of the Earth sciences, as it is the key to promoting more rational and wider applications of AI in the field of Earth sciences. In addition, the proposed interpretable framework may be the forerunner of technological innovation in the Earth sciences. | Computer Vision and Pattern Recognition (cs.CV) | There are some errors in the results, and a new revision is still in preparation | factual/methodological/other critical errors in manuscript | 13,720
2311.05922v2 | Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction | Few-shot relation extraction involves identifying the type of relationship between two specific entities within a text, using a limited number of annotated samples. A variety of solutions to this problem have emerged by applying meta-learning and neural graph techniques which typically necessitate a training process for adaptation. Recently, the strategy of in-context learning has been demonstrating notable results without the need of training. Few studies have already utilized in-context learning for zero-shot information extraction. Unfortunately, the evidence for inference is either not considered or implicitly modeled during the construction of chain-of-thought prompts. In this paper, we propose a novel approach for few-shot relation extraction using large language models, named CoT-ER, chain-of-thought with explicit evidence reasoning. In particular, CoT-ER first induces large language models to generate evidences using task-specific and concept-level knowledge. Then these evidences are explicitly incorporated into chain-of-thought prompting for relation extraction. Experimental results demonstrate that our CoT-ER approach (with 0% training data) achieves competitive performance compared to the fully-supervised (with 100% training data) state-of-the-art approach on the FewRel1.0 and FewRel2.0 datasets. | Computation and Language (cs.CL) | An error example is in Table 14 on Page 18. Need to carefully correct and evaluate the error | factual/methodological/other critical errors in manuscript | 13,723 |
2311.06394v2 | A design of Convolutional Neural Network model for the Diagnosis of the COVID-19 | With the spread of COVID-19 around the globe over the past year, the usage of artificial intelligence (AI) algorithms and image processing methods to analyze the chest X-ray images of patients with COVID-19 has become essential. The recognition of the COVID-19 virus in the lung area of a patient is one of the basic and essential needs of clinical centers and hospitals. Most research in this field has been devoted to papers based on deep learning methods utilizing CNNs (Convolutional Neural Networks), which mainly deal with the screening of sick and healthy patients. In this study, a new structure of a 19-layer CNN has been recommended for accurate recognition of COVID-19 from chest X-ray pictures. The offered CNN is developed to serve as a precise diagnosis system for a three-class (viral pneumonia, Normal, COVID) and a four-class classification (Lung opacity, Normal, COVID-19, and pneumonia). A comparison is conducted among the outcomes of the offered procedure and some popular pretrained networks, including Inception, Alexnet, ResNet50, Squeezenet, and VGG19, based on Specificity, Accuracy, Precision, Sensitivity, Confusion Matrix, and F1-score. The experimental results of the offered CNN method indicate its superiority over the existing published procedures. This method can be a useful tool for clinicians in making proper decisions about COVID-19. | Image and Video Processing (eess.IV) | Important mistakes. Also, another author has contributed to the revised version, so it is not appropriate for it to be with only my name | factual/methodological/other critical errors in manuscript | 13,724
2311.06397v2 | Predicting Stock Price of Construction Companies using Weighted Ensemble Learning | Modeling the behavior of stock price data has always been one of the most challenging applications of Artificial Intelligence (AI) and Machine Learning (ML) due to its high complexity and dependence on various conditions. Recent studies show that this is difficult to do with just one learning model. The problem can be more complex for companies in the construction sector, due to the dependency of their behavior on more conditions. This study aims to provide a hybrid model for improving the accuracy of prediction for the stock price index of companies in the construction sector. The contribution of this paper can be considered as follows: First, a combination of several prediction models is used to predict the stock price, so that the learning models can cover each other's errors. In this research, an ensemble model based on Artificial Neural Network (ANN), Gaussian Process Regression (GPR) and Classification and Regression Tree (CART) is presented for predicting the stock price index. Second, an optimization technique is used to determine the effect of each learning model on the prediction result. For this purpose, all three mentioned algorithms first process the data simultaneously and perform the prediction operation. Then, using the Cuckoo Search (CS) algorithm, the output weight of each algorithm is determined as a coefficient. Finally, using the ensemble technique, these results are combined and the final output is generated through weighted averaging on the optimal coefficients. The results show that using CS optimization in the proposed ensemble system is highly effective in reducing prediction error. Comparing the evaluation results of the proposed system with similar algorithms indicates that our model is more accurate and can be useful for predicting the stock price index in real-world scenarios. | Applications (stat.AP) | important mistakes | factual/methodological/other critical errors in manuscript | 13,725
2311.07472v2 | Invariance principle and local limit theorem for a class of random conductance models with long-range jumps | We study continuous time random walks on $\mathbb{Z}^d$ (with $d \geq 2$) among random conductances $\{ \omega(\{x,y\}) : x,y \in \mathbb{Z}^d\}$ that permit jumps of arbitrary length. The law of the random variables $\omega(\{x,y\})$, taking values in $[0, \infty)$, is assumed to be stationary and ergodic with respect to space shifts. Assuming that the first moment of $\sum_{x \in \mathbb{Z}^d} \omega(\{0,x\}) |x|^2$ and the $q$-th moment of $1/\omega(0,x)$ for $x$ neighbouring the origin are finite for some $ q >d/2$, we show a quenched invariance principle and a quenched local limit theorem, where the moment condition is optimal for the latter. We also obtain Hölder regularity estimates for solutions of the heat equation for the associated non-local discrete operator, and deduce that the pointwise spectral dimension equals $d$ almost surely. Our results apply to random walks on long-range percolation graphs with connectivity exponents larger than $d+2$ when all nearest-neighbour edges are present. | Probability (math.PR) | There is a gap in the proof of Proposition 4.2, step 3 | factual/methodological/other critical errors in manuscript | 13,728 |
2311.09496v2 | Posterior-Mean Separable Costs of Information Acquisition | We analyze a problem of revealed preference given state-dependent stochastic choice data in which the payoff to a decision maker (DM) only depends on their beliefs about posterior means. Often, the DM must also learn about or pay attention to the state; in applied work on this subject, a convenient assumption is that the costs of such learning are linearly dependent in the distribution over posterior means. We provide testable conditions to identify whether this assumption holds. This allows for the use of information design techniques to solve the DM's problem. | Theoretical Economics (econ.TH) | As currently written, Lemma 6, which is central for the main result (Theorem 1), is incorrect. We need to try to replace it with a correct claim before this paper can be considered | factual/methodological/other critical errors in manuscript | 13,734 |
2311.09922v2 | Fast multiplication by two's complement addition of numbers represented as a set of polynomial radix 2 indexes, stored as an integer list for massively parallel computation | We demonstrate a multiplication method based on numbers represented as a set of polynomial radix 2 indices stored as an integer list. The 'polynomial integer index multiplication' method is a set of algorithms implemented in Python code. We demonstrate the method to be faster than both the Number Theoretic Transform (NTT) and Karatsuba for multiplication within a certain bit range; both are also implemented in Python code for comparison purposes with the polynomial radix 2 integer method. We demonstrate that it is possible to express any integer or real number as a list of integer indices, representing a finite series in base two. The finite-series integer index representation of a number can then be stored and distributed across multiple CPUs / GPUs. We show that operations of addition and multiplication can be applied as two's complement additions operating on the index integer representations and can be fully distributed across a given CPU / GPU architecture. We demonstrate fully distributed arithmetic operations such that the 'polynomial integer index multiplication' method overcomes the current limitation of parallel multiplication methods, i.e., the need to share common core memory and common disk for the calculation of results and intermediate results. | Mathematical Software (cs.MS) | Withdrawn to review some details of the text for mathematical accuracy and proof. We plan to resubmit | factual/methodological/other critical errors in manuscript | 13,736
2311.11602v3 | A Multi-In-Single-Out Network for Video Frame Interpolation without Optical Flow | In general, deep learning-based video frame interpolation (VFI) methods have predominantly focused on estimating motion vectors between two input frames and warping them to the target time. While this approach has shown impressive performance for linear motion between two input frames, it exhibits limitations when dealing with occlusions and nonlinear movements. Recently, generative models have been applied to VFI to address these issues. However, as VFI is not a task focused on generating plausible images, but rather on predicting accurate intermediate frames between two given frames, performance limitations still persist. In this paper, we propose a multi-in-single-out (MISO) based VFI method that does not rely on motion vector estimation, allowing it to effectively model occlusions and nonlinear motion. Additionally, we introduce a novel motion perceptual loss that enables MISO-VFI to better capture the spatio-temporal correlations within the video frames. Our MISO-VFI method achieves state-of-the-art results on VFI benchmarks Vimeo90K, Middlebury, and UCF101, with a significant performance gap compared to existing approaches. | Computer Vision and Pattern Recognition (cs.CV) | Discovering a problem with the manuscript | factual/methodological/other critical errors in manuscript | 13,741 |
2311.12639v2 | KNVQA: A Benchmark for evaluation knowledge-based VQA | Within the multimodal field, large vision-language models (LVLMs) have made significant progress due to their strong perception and reasoning capabilities in the visual and language systems. However, LVLMs are still plagued by the two critical issues of object hallucination and factual accuracy, which limit the practicality of LVLMs in different scenarios. Furthermore, previous evaluation methods focus more on the comprehension and reasoning of language content but lack a comprehensive evaluation of multimodal interactions, thereby resulting in potential limitations. To this end, we propose a novel KNVQA-Eval, which is devoted to knowledge-based VQA task evaluation to reflect the factuality of multimodal LVLMs. To ensure the robustness and scalability of the evaluation, we develop a new KNVQA dataset by incorporating human judgment and perception, aiming to evaluate the accuracy of standard answers relative to AI-generated answers in knowledge-based VQA. This work not only comprehensively evaluates the contextual information of LVLMs using reliable human annotations, but also further analyzes the fine-grained capabilities of current methods to reveal potential avenues for subsequent optimization of LVLMs-based estimators. Our proposed VQA-Eval and corresponding dataset KNVQA will facilitate the development of automatic evaluation tools with the advantages of low cost, privacy protection, and reproducibility. Our code will be released upon publication. | Computer Vision and Pattern Recognition (cs.CV) | There was a little error in the method section of the paper | factual/methodological/other critical errors in manuscript | 13,746 |
2311.13010v2 | Beyond Catoni: Sharper Rates for Heavy-Tailed and Robust Mean Estimation | We study the fundamental problem of estimating the mean of a $d$-dimensional distribution with covariance $\Sigma \preccurlyeq \sigma^2 I_d$ given $n$ samples. When $d = 1$, Catoni \cite{catoni} showed an estimator with error $(1+o(1)) \cdot \sigma \sqrt{\frac{2 \log \frac{1}{\delta}}{n}}$, with probability $1 - \delta$, matching the Gaussian error rate. For $d>1$, a natural estimator outputs the center of the minimum enclosing ball of one-dimensional confidence intervals to achieve a $1-\delta$ confidence radius of $\sqrt{\frac{2 d}{d+1}} \cdot \sigma \left(\sqrt{\frac{d}{n}} + \sqrt{\frac{2 \log \frac{1}{\delta}}{n}}\right)$, incurring a $\sqrt{\frac{2d}{d+1}}$-factor loss over the Gaussian rate. When the $\sqrt{\frac{d}{n}}$ term dominates by a $\sqrt{\log \frac{1}{\delta}}$ factor, \cite{lee2022optimal-highdim} showed an improved estimator matching the Gaussian rate. This raises a natural question: is the Gaussian rate achievable in general? Or is the $\sqrt{\frac{2 d}{d+1}}$ loss \emph{necessary} when the $\sqrt{\frac{2 \log \frac{1}{\delta}}{n}}$ term dominates? We show that the answer to both these questions is \emph{no} -- we show that \emph{some} constant-factor loss over the Gaussian rate is necessary, but construct an estimator that improves over the above naive estimator by a constant factor. We also consider robust estimation, where an adversary is allowed to corrupt an $\epsilon$-fraction of samples arbitrarily: in this case, we show that the above strategy of combining one-dimensional estimates and incurring the $\sqrt{\frac{2d}{d+1}}$-factor \emph{is} optimal in the infinite-sample limit. | Statistics Theory (math.ST) | Error in proof of Theorem 1.1 | factual/methodological/other critical errors in manuscript | 13,747 |
2311.13089v2 | Quantum-spin-liquid state in kagome YCu$_3$(OH)$_6$[(Cl$_x$Br$_{1-x}$)$_{3-y}$(OH)$_{y}$]: The role of alternate-bond hexagons and beyond | Quantum spin liquids are exotic states of spin systems characterized by long-range entanglement and emergent fractionalized quasiparticles. One of the challenges in identifying quantum spin liquids in real materials lies in distinguishing them from trivial paramagnetic states. Here we studied the magnetic properties of a kagome system YCu$_3$(OH)$_6$[(Cl$_x$Br$_{1-x}$)$_{3-y}$(OH)$_{y}$]. In this system, some of the hexagons exhibit alternate bonds along the Cu-O-Cu exchange paths, while others remain uniform. We found that a long-range antiferromagnetic order emerges when uniform hexagons dominate. However, when the order disappears due to the increase number of alternate-bond hexagons, two different types of paramagnetic states are found. Interestingly, even though their proportions of alternate-bond hexagons are similar, these two paramagnetic states exhibit dramatically different low-temperature specific-heat behaviors, which can be attributed to the properties of a quantum-spin-liquid state versus a trivial paramagnetic state. Our results demonstrate that the formation of the quantum spin liquid in this system is accompanied by a substantial increase of low-energy entropy, likely arising from emergent fractionalized quasiparticles. Thus, this system provides a unique platform for studying how to differentiate a quantum spin liquid from a trivial paramagnetic state. | Strongly Correlated Electrons (cond-mat.str-el) | A major flaw has been found in region III. A revision will appear later | factual/methodological/other critical errors in manuscript | 13,749 |
2311.13404v2 | Animatable 3D Gaussians for High-fidelity Synthesis of Human Motions | We present a novel animatable 3D Gaussian model for rendering high-fidelity free-view human motions in real time. Compared to existing NeRF-based methods, the model has a better capability of synthesizing high-frequency details without the jittering problem across video frames. The core of our model is a novel augmented 3D Gaussian representation, which attaches each Gaussian with a learnable code. The learnable code serves as a pose-dependent appearance embedding for refining the erroneous appearance caused by geometric transformation of Gaussians, based on which an appearance refinement model is learned to produce residual Gaussian properties to match the appearance in the target pose. To force the Gaussians to learn the foreground human only without background interference, we further design a novel alpha loss to explicitly constrain the Gaussians within the human body. We also propose to jointly optimize the human joint parameters to improve the appearance accuracy. The animatable 3D Gaussian model can be learned with shallow MLPs, so new human motions can be synthesized in real time (66 fps on average). Experiments show that our model has superior performance over NeRF-based methods. | Computer Vision and Pattern Recognition (cs.CV) | Some experiment data is wrong. The expression of the paper in introduction and abstract is incorrect. Some graphs have inappropriate descriptions | factual/methodological/other critical errors in manuscript | 13,750
2311.13746v2 | Quantum Loop effects to Primordial perturbations at the end of Type III hilltop inflation models | In this work, we analytically calculate the spectra of primordial perturbations at the end of Type III hilltop inflation models under the slow-roll approximation. We examine the one-loop corrections of the spectra and find that those from the inflaton self-interaction are negligible. On the contrary, the loop effects from the interaction between the inflaton field and the waterfall field can be significant when the vacuum expectation value of the waterfall field is small. The implications are discussed. | Cosmology and Nongalactic Astrophysics (astro-ph.CO) | There was an error which changed the conclusion | factual/methodological/other critical errors in manuscript | 13,751 |
2311.14627v4 | A new use of the Kurdyka-Lojasiewicz property to study asymptotic behaviours of some stochastic optimization algorithms in a non-convex differentiable framework | The asymptotic analysis of a generic stochastic optimization algorithm mainly relies on the establishment of a specific descent condition. While the convexity assumption allows for technical shortcuts and generally leads to strict convergence, dealing with the non-convex framework, on the contrary, requires the use of specific results such as those relying on the Kurdyka-Lojasiewicz (KL) theory. While such tools have become popular in the field of deterministic optimization, they are much less widespread in the stochastic context and, in this case, the few works making use of them are essentially based on trajectory-by-trajectory approaches. In this paper, we propose a new methodology, also based on KL theory, for deeper asymptotic investigations on a stochastic scheme verifying a descent condition. The specificity of our work is that it is of a macroscopic nature, insofar as our strategy of proof is in-expectation-based and therefore seems more natural with respect to the conditional-type noise properties typically encountered in the stochastic literature nowadays. | Optimization and Control (math.OC) | Mathematical mistake in Section 5 (Lemma 5.5 and hence Theorem 5.6) | factual/methodological/other critical errors in manuscript | 13,755
2311.14630v4 | SYK Model in a Non-Gaussian disorder ensemble and emergent Coleman's mechanism | We consider the case of the SYK model with non-gaussian disorder in the large $N$ limit. After obtaining the effective action, we derive the density of states and the free energy of the modified theory. We show that the non-gaussian disorder corresponds to a non-local Liouville theory and a non-minimally coupled 2D gravity action. It also provides a nice realization of Colemania - Coleman's idea from the 80s of generating a small Cosmological Constant. Finally, we also calculate out-of-time-order correlation functions (OTOCs) for the model. | High Energy Physics - Theory (hep-th) | The current version has an error in Eq. 2.5, which affects the subsequent calculations | factual/methodological/other critical errors in manuscript | 13,756
2311.14724v3 | Comment on "Work Fluctuations for a Harmonically Confined Active Ornstein-Uhlenbeck Particle" | We argue that the results presented by M. Semeraro et al. [Phys. Rev. Lett. 131, 158302 (2023)] for the work fluctuations of an active Ornstein-Uhlenbeck particle trapped in a harmonic potential are incomplete and partly inexact due to the possible divergence of the generating function at a finite time | Statistical Mechanics (cond-mat.stat-mech) | Withdrawn due to a flaw in the argument, as pointed out by the authors of Ref. 1. The divergences of the generating function at finite time (which do exist) do not modify the domain of existence of the SCGF computed in Ref. 1. This will be explained in a forthcoming paper | factual/methodological/other critical errors in manuscript | 13,757
2311.14958v2 | Monge-Ampère equation on compact Hermitian manifolds | Given a cohomology $(1,1)$-class $\{\beta\}$ of a compact Hermitian manifold $(X,\omega)$ such that there exists a bounded potential in $\{\beta\}$, we show that the degenerate complex Monge-Ampère equation $(\beta+dd^c \varphi)^n=\mu$ has a unique solution in the full mass class $\mathcal{E}(X,\beta)$, where $\mu$ is any probability measure on $X$ which does not charge pluripolar subsets. We also study other Monge-Ampère type equations which correspond to $\lambda>0$ and $\lambda<0$. As a preparation for the $\lambda<0$ case, we give a general answer to an open problem about the Lelong number which was surveyed by Dinew-Guedj-Zeriahi \cite[Problem 36]{DGZ16}. Moreover, we obtain more general results on singular spaces and for equations with prescribed singularities when the model potential has a small unbounded locus. These results generalize much recent work of \cite{EGZ09}\cite{BBGZ13}\cite{DNL18}\cite{LWZ23} etc. | Differential Geometry (math.DG) | There are some errors in the citations, and some details need to be improved | factual/methodological/other critical errors in manuscript | 13,759
2311.15153v2 | Self-Supervised Learning for SAR ATR with a Knowledge-Guided Predictive Architecture | Recently, the emergence of a large number of Synthetic Aperture Radar (SAR) sensors and target datasets has made it possible to unify downstream tasks with self-supervised learning techniques, which can pave the way for building the foundation model in the SAR target recognition field. The major challenge of self-supervised learning for SAR target recognition lies in the generalizable representation learning in low data quality and this http URL address the aforementioned problem, we propose a knowledge-guided predictive architecture that uses local masked patches to predict the multiscale SAR feature representations of unseen context. The core of the proposed architecture lies in combining traditional SAR domain feature extraction with state-of-the-art scalable self-supervised learning for accurate generalized feature representations. The proposed framework is validated on various downstream datasets (MSTAR, FUSAR-Ship, SAR-ACD and SSDD), and can bring consistent performance improvement for SAR target recognition. The experimental results strongly demonstrate the unified performance improvement of the self-supervised learning technique for SAR target recognition across diverse targets, scenes and sensors. | Computer Vision and Pattern Recognition (cs.CV) | We found that the LoMaR framework may produce NaN values in half-precision calculations, so we need to modify the framework and experiments significantly, and the current version is not suitable as a reference | factual/methodological/other critical errors in manuscript | 13,760
2311.15758v2 | A symmetry breaking phenomenon for anisotropic harmonic maps from a 2D annulus into $\mathbb S^1$ | In a two dimensional annulus $A_\rho=\{x\in \mathbb R^2: \rho<|x|<1\}$, $\rho\in (0,1)$, we characterize $0$-homogeneous minimizers, in $H^1(A_\rho;\mathbb S^1)$ with respect to their own boundary conditions, of the anisotropic energy \begin{equation*} E_\delta(u)=\int_{A_\rho} |\nabla u|^2 +\delta \left( (\nabla\cdot u)^2-(\nabla\times u)^2\right) \, dx,\quad \delta\in (-1,1). \end{equation*} Even for a small anisotropy $0<|\delta|\ll 1$, we exhibit qualitative properties very different from the isotropic case $\delta=0$. In particular, $0$-homogeneous critical points of degree $d\notin \lbrace 0,1,2\rbrace$ are always local minimizers, but in thick annuli ($\rho\ll 1$) they are not minimizers: the $0$-homogeneous symmetry is broken. One corollary is that entire solutions to the anisotropic Ginzburg-Landau system have a far-field behavior very different from the isotropic case studied by Brezis, Merle and Rivière. The tools we use include: ODE and variational arguments; asymptotic expansions, interpolation inequalities and explicit computations involving near-optimizers of these inequalities for proving that $0$-homogeneous critical points are not minimizers in thick annuli. | Analysis of PDEs (math.AP) | The third item in Theorem 1.1 is wrong. Minimality of the homogeneous critical point follows from calculations in an upcoming work by P. Bauman and D. Phillips | factual/methodological/other critical errors in manuscript | 13,762 |
2311.16594v3 | Monitor Placement for Fault Localization in Deep Neural Network Accelerators | Systolic arrays are a prominent choice for deep neural network (DNN) accelerators because they offer parallelism and efficient data reuse. Improving the reliability of DNN accelerators is crucial as hardware faults can degrade the accuracy of DNN inferencing. Systolic arrays make use of a large number of processing elements (PEs) for parallel processing, but when one PE is faulty, the error propagates and affects the outcomes of downstream PEs. Due to the large number of PEs, the cost associated with implementing hardware-based runtime monitoring of every single PE is infeasible. We present a solution to optimize the placement of hardware monitors within systolic arrays. We first prove that $2N-1$ monitors are needed to localize a single faulty PE and we also derive the monitor placement. We show that a second placement optimization problem, which minimizes the set of candidate faulty PEs for a given number of monitors, is NP-hard. Therefore, we propose a heuristic approach to balance the reliability and hardware resource utilization in DNN accelerators when number of monitors is limited. Experimental evaluation shows that to localize a single faulty PE, an area overhead of only 0.33% is incurred for a $256\times 256$ systolic array. | Hardware Architecture (cs.AR) | Technical fallacies appear in this paper | factual/methodological/other critical errors in manuscript | 13,767 |
2311.17297v2 | Stability control for USVs with SINDY-based online dynamic model update | Unmanned Surface Vehicles (USVs) play a pivotal role in various applications, including surface rescue, commercial transactions, scientific exploration, water rescue, and military operations. The effective control of high-speed unmanned surface boats stands as a critical aspect within the overall USV system, particularly in challenging environments marked by complex surface obstacles and dynamic conditions, such as time-varying surges, non-directional forces, and unpredictable winds. In this paper, we propose a data-driven control method based on Koopman theory. This involves constructing a high-dimensional linear model by mapping a low-dimensional nonlinear model to a higher-dimensional linear space through data identification. The observable USV dynamical system is dynamically reconstructed using online error learning. To enhance tracking control accuracy, we utilize a Constructive Lyapunov Function (CLF)-Control Barrier Function (CBF)-Quadratic Programming (QP) approach to regulate the high-dimensional linear dynamical system obtained through identification. This approach facilitates error compensation, thereby achieving more precise tracking control. | Robotics (cs.RO) | A later inspection revealed serious defects in the simulation results. Due to an incorrect data comparison, the final results were wrong. We will re-evaluate the feasibility and usability of the paper's implementation in the future and submit it again | factual/methodological/other critical errors in manuscript | 13,771
2311.17460v5 | Capturing Human Motion from Monocular Images in World Space with Weak-supervised Calibration | Previous methods for 3D human motion recovery from monocular images often fall short due to reliance on camera coordinates, leading to inaccuracies in real-world applications where complex shooting conditions are prevalent. The limited availability and diversity of focal length labels further exacerbate misalignment issues in reconstructed 3D human bodies. To address these challenges, we introduce W-HMR, a weak-supervised calibration method that predicts "reasonable" focal lengths based on body distortion information, eliminating the need for precise focal length labels. Our approach enhances 2D supervision precision and recovery accuracy. Additionally, we present the OrientCorrect module, which corrects body orientation for plausible reconstructions in world space, avoiding the error accumulation associated with inaccurate camera rotation predictions. Our contributions include a novel weak-supervised camera calibration technique, an effective orientation correction module, and a decoupling strategy that significantly improves the generalizability and accuracy of human motion recovery in both camera and world coordinates. The robustness of W-HMR is validated through extensive experiments on various datasets, showcasing its superiority over existing methods. Codes and demos have been released on the project page this https URL . | Computer Vision and Pattern Recognition (cs.CV) | There are some errors in the first section | factual/methodological/other critical errors in manuscript | 13,774 |
2311.18239v2 | A unified continuous greedy algorithm for $k$-submodular maximization under a down-monotone constraint | A $k$-submodular function is a generalization of the submodular set function. Many practical applications can be modeled as maximizing a $k$-submodular function, such as multi-cooperative games, sensor placement with $k$ types of sensors, influence maximization with $k$ topics, and feature selection with $k$ partitions. In this paper, we provide a unified continuous greedy algorithm for the $k$-submodular maximization problem under a down-monotone constraint. Our technique involves relaxing the discrete variables in a continuous space by using the multilinear extension of the $k$-submodular function to find a fractional solution, and then rounding it to obtain the feasible solution. Our proposed algorithm runs in polynomial time and can be applied to both the non-monotone and monotone cases. When the objective function is non-monotone, our algorithm achieves an approximation ratio of $(1/e-o(1))$; for a monotone $k$-submodular objective function, it achieves an approximation ratio of $(1-1/e-o(1))$. | Combinatorics (math.CO) | There are errors, not yet identified, since the claimed results contradict the inapproximability of unconstrained monotone k-submodular maximization | factual/methodological/other critical errors in manuscript | 13,780
2312.00609v2 | Massey iterated products and closed geodesics | In this paper, we show that the existence of two sequences of Massey iterated products containing zero in the cohomology of a 1-connected CW complex of finite type $X$ directly bears on the unbounded growth of the Betti numbers of the free loop space of $X$. | Algebraic Topology (math.AT) | There are many mistakes in this version | factual/methodological/other critical errors in manuscript | 13,787
2312.01001v2 | Learning county from pixels: Corn yield prediction with attention-weighted multiple instance learning | Remote sensing technology has become a promising tool in yield prediction. Most prior work employs satellite imagery for county-level corn yield prediction by spatially aggregating all pixels within a county into a single value, potentially overlooking the detailed information and valuable insights offered by more granular data. To this end, this research examines each county at the pixel level and applies multiple instance learning to leverage detailed information within a county. In addition, our method addresses the "mixed pixel" issue caused by the inconsistent resolution between feature datasets and crop mask, which may introduce noise into the model and therefore hinder accurate yield prediction. Specifically, the attention mechanism is employed to automatically assign weights to different pixels, which can mitigate the influence of mixed pixels. The experimental results show that the developed model outperforms four other machine learning models over the past five years in the U.S. corn belt and demonstrates its best performance in 2022, achieving a coefficient of determination (R2) value of 0.84 and a root mean square error (RMSE) of 0.83. This paper demonstrates the advantages of our approach from both spatial and temporal perspectives. Furthermore, through an in-depth study of the relationship between mixed pixels and attention, it is verified that our approach can capture critical feature information while filtering out noise from mixed pixels. | Computer Vision and Pattern Recognition (cs.CV) | I am writing to request the withdrawal of my paper submitted to arXiv. Upon further review, I have identified an error in the paper that significantly affects the results and conclusions. To maintain the integrity of the scientific record and prevent the dissemination of incorrect information, I believe it is necessary to withdraw the paper from the archive | factual/methodological/other critical errors in manuscript | 13,789 |
2312.01890v3 | Optical anisotropy and nonlinearity in deep ultraviolet fluorooxoborates | Optical anisotropy and nonlinearity are two tantalizingly important and enticing properties of an optical crystal. Combining these two features will have a miraculous effect. Up-conversion can extend solid-state laser sources to the ultraviolet and deep ultraviolet (DUV) ranges through harmonic generation, and down-conversion is needed for quantum information technology, but only a few suitable materials are known as the medium because of the combination of properties that is required. These include suitable band gaps, moderate optical anisotropy for phase matching and a strong nonlinear optical (NLO) response. Fluorooxoborates are a new ideal platform for this effect in the DUV. Here we demonstrate that fluorooxoborate is the optimal framework for DUV NLO materials and show the significance of the incorporation of fluorine in borates. The NLO performance of fluorooxoborates is strongly improved in terms of local crystal structure and distribution of electronic states. Importantly, the role of fluorine is to control the structure while maintaining high band gaps, rather than directly providing large contributions to birefringence and second harmonic generation, as is conventionally assumed. This is a consequence of the microscopic electron distribution and the energy position of the fluorine states well below the valence band maxima. Based on our understanding, we constructed two artificial structures, and they both behave as anticipated. | Optics (physics.optics) | This manuscript contains many errors | factual/methodological/other critical errors in manuscript | 13,793
2312.02184v2 | Channel-Feedback-Free Transmission for Downlink FD-RAN: A Radio Map based Complex-valued Precoding Network Approach | As the demand for high-quality services proliferates, an innovative network architecture, the fully-decoupled RAN (FD-RAN), has emerged for more flexible spectrum resource utilization and lower network costs. However, with the decoupling of uplink base stations and downlink base stations in FD-RAN, the traditional transmission mechanism, which relies on real-time channel feedback, is not suitable as the receiver is not able to feed back accurate and timely channel state information to the transmitter. This paper proposes a novel transmission scheme without relying on physical layer channel feedback. Specifically, we design a radio map based complex-valued precoding network (RMCPNet) model, which outputs the base station precoding based on user location. RMCPNet comprises multiple subnets, with each subnet responsible for extracting unique modal features from diverse input modalities. Furthermore, the multi-modal embeddings derived from these distinct subnets are integrated within the information fusion layer, culminating in a unified representation. We also develop a specific RMCPNet training algorithm that employs the negative spectral efficiency as the loss function. We evaluate the performance of the proposed scheme on the public DeepMIMO dataset and show that RMCPNet can achieve 16% and 76% performance improvements over the conventional real-valued neural network and statistical codebook approach, respectively. | Information Theory (cs.IT) | There is a content error, and I do not want to make it public now | factual/methodological/other critical errors in manuscript | 13,795
2312.02498v2 | Provable Reinforcement Learning for Networked Control Systems with Stochastic Packet Disordering | This paper formulates a stochastic optimal control problem for linear networked control systems (NCSs) featuring stochastic packet disordering (PD), with a unique stabilizing solution certified. The problem is solved by proposing reinforcement learning algorithms. A measurement method is first presented to deal with PD and calculate the newest control input. The NCSs with stochastic PD are modeled as stochastic NCSs. Then, given a cost function, a modified algebraic Riccati equation (MARE) is derived within the formulation. We propose offline policy iteration and value iteration algorithms to solve the MARE with provable convergence. These two algorithms require knowledge of NCS dynamics and PD probabilities. To relax that requirement, we further design online model-free off-policy and Q-learning algorithms with an online estimation method for PD probability. Both model-free algorithms solve the optimal control problem using real-time system states, control inputs, and PD probability estimates. Simulation results verify the proposed formulation and algorithms. | Systems and Control (eess.SY) | This is a wrong version with errors in the problem setting and descriptions in the main sections | factual/methodological/other critical errors in manuscript | 13,796
2312.04025v2 | Moirai: Towards Optimal Placement for Distributed Inference on Heterogeneous Devices | The escalating size of Deep Neural Networks (DNNs) has spurred a growing research interest in hosting and serving DNN models across multiple devices. A number of studies have been reported to partition a DNN model across devices, providing device placement solutions. The methods that have appeared in the literature, however, either suffer from poor placement performance due to the exponential search space or miss an optimal placement as a consequence of the reduced search space with limited heuristics. Moreover, these methods have ignored the runtime inter-operator optimization of a computation graph when coarsening the graph, which degrades the end-to-end inference performance. This paper presents Moirai that better exploits runtime inter-operator fusion in a model to render a coarsened computation graph, reducing the search space while maintaining the inter-operator optimization provided by inference backends. Moirai also generalizes the device placement algorithm from multiple perspectives by considering inference constraints and device this http URL experimental evaluation with 11 large DNNs demonstrates that Moirai outperforms the state-of-the-art counterparts, i.e., Placeto, m-SCT, and GETF, up to 4.28$\times$ in reduction of the end-to-end inference latency. Moirai code is anonymously released at \url{ this https URL }. | Distributed, Parallel, and Cluster Computing (cs.DC) | Temporarily withdrawn to amend experimental results | factual/methodological/other critical errors in manuscript | 13,803
2312.04119v2 | A Multilevel Guidance-Exploration Network and Behavior-Scene Matching Method for Human Behavior Anomaly Detection | Human behavior anomaly detection aims to identify unusual human actions, playing a crucial role in intelligent surveillance and other areas. The current mainstream methods still adopt reconstruction or future frame prediction techniques. However, reconstructing or predicting low-level pixel features easily enables the network to achieve overly strong generalization ability, allowing anomalies to be reconstructed or predicted as effectively as normal data. Different from their methods, inspired by the Student-Teacher Network, we propose a novel framework called the Multilevel Guidance-Exploration Network(MGENet), which detects anomalies through the difference in high-level representation between the Guidance and Exploration network. Specifically, we first utilize the pre-trained Normalizing Flow that takes skeletal keypoints as input to guide an RGB encoder, which takes unmasked RGB frames as input, to explore motion latent features. Then, the RGB encoder guides the mask encoder, which takes masked RGB frames as input, to explore the latent appearance feature. Additionally, we design a Behavior-Scene Matching Module(BSMM) to detect scene-related behavioral anomalies. Extensive experiments demonstrate that our proposed method achieves state-of-the-art performance on ShanghaiTech and UBnormal datasets, with AUC of 86.9 % and 73.5 %, respectively. The code will be available on this https URL . | Computer Vision and Pattern Recognition (cs.CV) | The experimental methods and results are incorrect and need to be revised | factual/methodological/other critical errors in manuscript | 13,804 |
2312.05804v3 | Layered 3D Human Generation via Semantic-Aware Diffusion Model | The generation of 3D clothed humans has attracted increasing attention in recent years. However, existing work cannot generate layered high-quality 3D humans with consistent body structures. As a result, these methods are unable to arbitrarily and separately change and edit the body and clothing of the human. In this paper, we propose a text-driven layered 3D human generation framework based on a novel physically-decoupled semantic-aware diffusion model. To keep the generated clothing consistent with the target text, we propose a semantic-confidence strategy for clothing that can eliminate the non-clothing content generated by the model. To match the clothing with different body shapes, we propose a SMPL-driven implicit field deformation network that enables the free transfer and reuse of clothing. Besides, we introduce uniform shape priors based on the SMPL model for body and clothing, respectively, which generates more diverse 3D content without being constrained by specific templates. The experimental results demonstrate that the proposed method not only generates 3D humans with consistent body structures but also allows free editing in a layered manner. The source code will be made public. | Computer Vision and Pattern Recognition (cs.CV) | Error in the derivation of equation 11 in section 4.3.1 | factual/methodological/other critical errors in manuscript | 13,808 |
2312.06914v3 | Exploring Novel Object Recognition and Spontaneous Location Recognition Machine Learning Analysis Techniques in Alzheimer's Mice | Understanding object recognition patterns in mice is crucial for advancing behavioral neuroscience and has significant implications for human health, particularly in the realm of Alzheimer's research. This study is centered on the development, application, and evaluation of a state-of-the-art computational pipeline designed to analyze such behaviors, specifically focusing on Novel Object Recognition (NOR) and Spontaneous Location Recognition (SLR) tasks. The pipeline integrates three advanced computational models: Any-Maze for initial data collection, DeepLabCut for detailed pose estimation, and Convolutional Neural Networks (CNNs) for nuanced behavioral classification. Employed across four distinct mouse groups, this pipeline demonstrated high levels of accuracy and robustness. Despite certain challenges like video quality limitations and the need for manual calculations, the results affirm the pipeline's efficacy and potential for scalability. The study serves as a proof of concept for a multidimensional computational approach to behavioral neuroscience, emphasizing the pipeline's versatility and readiness for future, more complex analyses. | Machine Learning (cs.LG) | Aspects of the paper contain errors, and data in the pipeline must be vetted one more time. More testing is necessary | factual/methodological/other critical errors in manuscript | 13,813 |
2312.06987v3 | A new lightweight additive homomorphic encryption algorithm | This article describes a lightweight additive homomorphic algorithm with the same encryption and decryption keys. Compared to standard additive homomorphic algorithms like Paillier, this algorithm reduces the computational cost of encryption and decryption from modular exponentiation to modular multiplication, and reduces the computational cost of ciphertext addition from modular multiplication to modular addition. This algorithm is based on a new mathematical problem: in two division operations, whether it is possible to infer the remainder or divisor based on the dividend when two remainders are related. Currently, it is not obvious how to break this problem, but further exploration is needed to determine if it is sufficiently difficult. In addition to this mathematical problem, we have also designed two interesting mathematical structures for decryption, which are used in the two algorithms mentioned in the main text. It is possible that the decryption structure of Algorithm 2 introduces new security vulnerabilities, but we have not investigated this issue thoroughly. | Cryptography and Security (cs.CR) | The algorithm proposed in this paper has a serious security problem: it can be attacked using orthogonal lattices | factual/methodological/other critical errors in manuscript | 13,815