Benchmark              Data Type    Attack Focus              Evaluation Metrics
…                      …            …                         Patch Success Rate, Realism Score
Cinà et al. [11]       Image        Gradient-based Attacks    Adversarial Success Rate
Zheng et al. [12]      Graph        Attacks on Graphs         Adversarial Robustness on Graphs
Li et al. [13]         VQA          Attacks on VQA            VQA Accuracy under Adversarial Conditions
Siddiqui et al. [14]   Time-series  Attacks on Time-Series    Time-Series Attack Success Rate
Our paper              Tabular      Attacks on Tabular Data   Attack Success Rate, Imperceptibility Metrics

While imperceptibility in the image domain is measured by slight pixel changes that remain "indistinguishable to the human eye" [15], such criteria do not translate directly to tabular data. Adversarial examples are created by strategically perturbing pixel values to cause misclassification by machine learning models, while still preserving the visual similarity to the original image. In contrast, tabular data presents unique challenges for adversarial attacks. Any changes to feature values can be easily spotted, making imperceptibility a more complex issue. For example, in image-based attacks, imperceptibility is concerned with altering pixels without compromising the visual integrity of the image. However, in tabular data, the definition of imperceptibility must account for human detectability across various feature dimensions.

Our benchmark captures the concept of imperceptibility for tabular data by focusing on four key quantitative properties: Proximity, Sparsity, Deviation, and Sensitivity [16]. These properties ensure that adversarial examples closely resemble the original data, minimise feature alterations, and respect the statistical distribution of the data. Additionally, while existing benchmarks are primarily focused on effectiveness (attack success rates), our benchmark uniquely combines this with an evaluation of imperceptibility, offering a more comprehensive assessment of adversarial robustness for tabular data. The incorporation of these imperceptibility metrics differentiates our work from prior benchmarks, which mostly focus on images, graphs, or time-series data and lack a detailed assessment of imperceptibility, particularly in the context of tabular datasets.

1.3. Contribution

This paper aims to address three main research questions. First, how effective are the evaluated adversarial attack algorithms on tabular data? Second, how imperceptible are these adversarial attack algorithms on tabular data? Finally, can the evaluated algorithms achieve a balance between imperceptibility and effectiveness, striving for optimal results in both aspects? To address these three questions, the paper evaluates the effectiveness of various adversarial attack algorithms on tabular data, examines the imperceptibility of these attacks, and explores the potential for achieving a balance between imperceptibility and effectiveness.

In the remainder of this paper, we explore the current state of adversarial attack research, with a particular focus on tabular data (Section 2). We propose a benchmark evaluation of the effectiveness and imperceptibility of adversarial attacks on tabular data (Section 3). By examining the trade-offs between attack success rates and imperceptibility, our evaluation framework provides valuable insights for developing both effective and imperceptible adversarial attacks (Section 4).
Our analysis further illuminates how different attack strategies prioritise either maximising attack imperceptibility or attack effectiveness, enabling researchers to strategically balance these competing objectives when designing novel adversarial techniques for tabular data (Section 5).

2. Background and Related Work

2.1. Adversarial Attacks

Adversarial attacks aim to mislead a machine learning model into making incorrect classifications by generating deliberately perturbed input data, known as adversarial examples. Consider a dataset where each input data point, represented by a vector x ∈ X, is associated with a class label y ∈ Y. We define a machine learning classifier f(·). An adversarial example x_adv is a perturbed variant of x that remains similar to x but is specifically designed to cause the classifier to incorrectly predict the label of x. This can be mathematically defined as:

\[ x^{\mathrm{adv}} = x + \delta \quad \text{subject to} \quad f(x^{\mathrm{adv}}) \neq y \]

where δ denotes the input perturbation.

A comprehensive taxonomy of adversarial attacks can be organised along various dimensions, considering factors such as the adversary's goals, capabilities, and knowledge [17].

2.1.1. Adversary's Goals

These can be categorised into four types based on how the adversarial perturbation affects the model's classification output [17, 18]: (1) Confidence reduction undermines prediction certainty without changing class labels, (2) Nontargeted misclassification seeks any form of classification error regardless of outcome specificity, (3) Targeted misclassification forces specific wrong outputs, and (4) Source/target misclassification requires dual specification of input and output classes.

Our focus lies on nontargeted misclassification, where the attacker's goal is to cause the model to produce any incorrect class prediction, regardless of specificity. For example, in a credit scoring model that predicts the likelihood of loan repayment based on tabular data such as income, credit history, and debt, this could mean changing the classification of a "low-risk" borrower to any other incorrect class, such as "medium" or "high-risk", without the attacker caring which specific misclassification occurs. Unlike targeted attacks, this approach does not require a specific incorrect outcome; its success depends only on causing a mismatch with the ground-truth label. This category is particularly consequential for tabular data systems, where structured feature interdependencies amplify the potential effects of even arbitrary misclassifications. Its computational efficiency (it does not require targeting specific classes) and its broad relevance (any error can disrupt real-world systems) make it a priority in our benchmark.

2.1.2. Adversary's Capabilities

Adversarial attacks can also be classified based on the capabilities of the adversary, specifically in terms of how much control they have over the perturbations applied to the input data. These capabilities are generally divided into two categories: unbounded attacks and bounded attacks. Each of these attack types represents a different level of access and constraint that an adversary may face when attempting to compromise a model.

• Unbounded Attacks: In an unbounded attack, the adversary has no restrictions on the magnitude or extent of the perturbations they can apply to the input data. Unbounded attacks attempt to minimise the distance between the input x and the adversarial example x_adv, obtaining the minimal perturbation δ without any constraint on the magnitude of the attack perturbation:

\[ \min \|\delta\| \quad \text{subject to} \quad f(x^{\mathrm{adv}}) \neq y \tag{1} \]

• Bounded Attacks: Bounded attacks are more constrained, as the adversary is limited in how much they can perturb the input data. These attacks are defined by an upper bound ϵ on the amount of change, such as keeping the perturbation within a certain range. The goal is to find an adversarial example x_adv, with perturbation δ within the budget ϵ, that misleads the prediction by maximising the loss function L of the machine learning model being attacked:

\[ \max L(f(x^{\mathrm{adv}}), y) \quad \text{subject to} \quad \|\delta\| \leq \epsilon \tag{2} \]

Our benchmark encompasses both unbounded and bounded attacks to provide a comprehensive evaluation of adversarial attacks on tabular data.

2.1.3. Adversary's Knowledge

Adversarial attacks are categorised into three primary threat models based on the adversary's knowledge of the target system: white-box, gray-box, and black-box attacks [17, 18]. These classifications reflect the attacker's access to knowledge about the model's architecture, parameters, and training data, ranging from full transparency (white-box) to complete opacity (black-box), with gray-box scenarios representing partial knowledge. Among these, white-box attacks pose the most severe threat, as they leverage comprehensive model insights to craft precise adversarial perturbations.

A white-box attack represents the most severe threat scenario, where the adversary possesses full knowledge of the target model, including its architecture (e.g., layer types, activation functions), trained parameters (weights and biases), and the distribution of training data. This complete access allows the attacker to compute exact gradients of the loss function with respect to the input data, enabling highly optimised adversarial example generation through techniques like gradient ascent. By exploiting the model's mathematical structure, adversaries can systematically identify input perturbations that maximally degrade performance while remaining imperceptible or semantically valid.

2.2. White-box Adversarial Attacks

In this work, we prioritise white-box attacks for benchmarking adversarial attacks in tabular data systems for two reasons. First, white-box attacks provide an upper-bound evaluation of vulnerability by assuming worst-case adversarial scenarios, thereby revealing fundamental weaknesses in model design or training. Second, most state-of-the-art attack methods are developed under white-box assumptions, enabling systematic comparisons with the existing literature. While gray-box and black-box attacks have practical relevance, white-box analysis offers a rigorous baseline for assessing inherent model robustness before considering real-world constraints on adversarial knowledge. The following methods are widely used for white-box attacks.

2.2.1. Fast Gradient Sign Method

The fast gradient sign method (FGSM) attack [15] works by calculating the gradient of the neural network with respect to the input data and using the sign of this gradient to determine the direction of the perturbation. Given an original datapoint x and the corresponding label y, the attack can be formalised as:

\[ x^{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(x, y)) \tag{3} \]

where ϵ is a hyperparameter determining the size of the perturbation, and ∇x J(·) represents the gradient of the loss function J with respect to the datapoint x. The term 'fast' in the context of the FGSM attack refers to the fact that it can be executed quickly, with just a single forward pass through the neural network and a single backward pass to compute the gradient used to perturb the input data. This simplicity and efficiency make it a popular choice for generating adversarial examples in machine learning applications.
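To make Eq. 3 concrete, the following is a minimal FGSM sketch assuming a PyTorch classifier trained with cross-entropy and features min-max scaled to [0, 1] (as in Section 4.1.1); the function name and the clamping convention are our own illustrative choices, not the benchmark's exact implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM (Eq. 3): x_adv = x + eps * sign(grad_x J(x, y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)     # J(x, y)
    loss.backward()                         # single backward pass for the gradient
    x_adv = x + epsilon * x.grad.sign()     # step in the gradient-sign direction
    return x_adv.clamp(0.0, 1.0).detach()   # keep features in the scaled [0, 1] range
```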
2.2.2. Iterative Method

Iterative methods are a family of techniques that produce adversarial examples by introducing perturbations into the input data in a series of small steps rather than a single large step. Two famous examples are the Basic Iterative Method (BIM) and Projected Gradient Descent (PGD), both derived from FGSM.

BIM. The Basic Iterative Method [19] extends FGSM by repeatedly applying gradient-guided perturbations. Starting with the original input x_adv_0 = x, it iteratively updates the adversarial example by ascending the loss gradient while constraining perturbations within a predefined ϵ-ball. For the i-th iteration, the update is:

\[ x^{\mathrm{adv}}_0 = x, \qquad x^{\mathrm{adv}}_{i+1} = \mathrm{Clip}\left\{ x^{\mathrm{adv}}_i + \alpha \cdot \mathrm{sign}\left( \nabla_x J(x^{\mathrm{adv}}_i, y) \right) \right\} \tag{4} \]

where α controls the magnitude of each perturbation step. The perturbation is also clipped to prevent it from becoming too large and potentially distorting the input beyond recognition.

PGD. Projected Gradient Descent [20] generalises BIM by incorporating two key modifications: (1) initialising x_adv_0 as a random point within the ϵ-neighbourhood of x, and (2) projecting perturbed samples back to the feasible region after each update:

\[ x^{\mathrm{adv}}_0 = \mathrm{Random}(x), \qquad x^{\mathrm{adv}}_{i+1} = \mathrm{Proj}\left\{ x^{\mathrm{adv}}_i + \alpha \cdot \mathrm{sign}\left( \nabla_x J(x^{\mathrm{adv}}_i, y) \right) \right\} \tag{5} \]

Here, Proj ensures that x_adv remains within the ϵ-bound via ℓ∞-norm projection. Unlike BIM's deterministic initialisation, PGD's stochastic start helps escape local optima, making it a more robust attack framework. BIM can be viewed as a special case of PGD without random initialisation or explicit projection steps.
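Under the same assumptions as the FGSM sketch above (and reusing its imports), a minimal PGD loop for Eq. 5 might look as follows; setting random_start=False and dropping the projection recovers BIM (Eq. 4).

```python
def pgd_attack(model, x, y, epsilon, alpha, steps=10, random_start=True):
    """Iterative l_inf attack (Eq. 5); BIM (Eq. 4) omits the random start."""
    x_adv = x.clone().detach()
    if random_start:
        # Random(x): start at a random point inside the eps-neighbourhood of x.
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # Proj: back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                     # Clip to the valid feature range
    return x_adv.detach()
```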
2.2.3. Carlini and Wagner Attack

The Carlini and Wagner attack [21], also known as the C&W attack, is a state-of-the-art adversarial attack that aims to find the minimum perturbation that can cause a misclassification in a targeted deep neural network. Unlike some other attacks, the C&W attack is designed to optimise a loss function that combines the perturbation magnitude, measured by a distance metric, with the prediction confidence, captured by an objective function:

\[ \arg\min_{x^{\mathrm{adv}}} \; \| x - x^{\mathrm{adv}} \|_p + c \cdot z(x^{\mathrm{adv}}) \tag{6} \]

where z(x_adv) is the objective function. This approach enables the C&W attack to be effective even against models that are robust to other types of attacks. Moreover, the C&W attack can also be adapted to perform targeted attacks, where the attacker aims to cause the model to predict a specific target class.

2.2.4. DeepFool

DeepFool [22] is an attack method based on the ℓ2-norm. Its underlying assumption is that the predictive model is linear, so that a hyperplane F = {x : w^T x + b = 0} exists that separates one class from another. Consequently, the search for adversarial examples can be framed as an optimisation problem, expressed as follows:

\[ r^*(x) = \arg\min_{x^{\mathrm{adv}}} \| x^{\mathrm{adv}} - x \|_2 = -\frac{f(x_0)}{\|w\|_2^2} w \quad \text{subject to} \quad f(x) \neq f(x^{\mathrm{adv}}) \tag{7} \]

where r*(x) is the minimum perturbation needed to change the class of x. It measures the closest distance from the datapoint to the decision-boundary hyperplane (shown in Figure 1).

Figure 1: DeepFool attacks in a linear binary classifier [22].

2.2.5. LowProFool

LowProFool, proposed by Ballet et al. [23], is an updated version of DeepFool for tabular data that uses a weighted ℓp-norm to determine the set of features to perturb. The attack uses the absolute value of Pearson's correlation coefficient for each numerical feature as a feature-importance vector v. Specifically, this metric is used to identify which features are comparatively inconspicuous and therefore more challenging for human observers to detect changes in:

\[ r^*(x) = \arg\min_{x^{\mathrm{adv}}} d(x^{\mathrm{adv}} - x) = \| (x^{\mathrm{adv}} - x) \odot v \|_p^2 \quad \text{subject to} \quad f(x) \neq f(x^{\mathrm{adv}}) \tag{8} \]

2.3. Adversarial Machine Learning on Tabular Data

Adversarial attacks on tabular data aim to manipulate inputs such that machine learning models produce incorrect predictions while keeping modifications imperceptible. These attacks can be categorised into white-box (requiring full model access) and black-box (requiring only query-based access) methods. This review groups adversarial attack methods accordingly and presents their key strengths and weaknesses.

White-box attacks leverage full access to the model's internal mechanisms, making them highly effective in generating optimised adversarial perturbations. One of the most fundamental gradient-based white-box methods is Projected Gradient Descent (PGD), introduced by Madry et al. [20], which iteratively perturbs input features in a direction that maximises classification error. Building upon PGD, Simonetto et al. [24] developed Constrained Adaptive PGD (CAPGD), which modifies traditional gradient attacks to account for feature constraints, ensuring that adversarial perturbations remain realistic within a tabular dataset's domain constraints. Another prominent gradient-based attack is the LowProFool attack [23]. This method selectively perturbs low-importance features, ensuring that adversarial modifications remain imperceptible to human scrutiny while still deceiving the model.

Beyond gradient-based attacks, some white-box approaches incorporate additional constraints to align adversarial perturbations with real-world feasibility. Mathov et al. [25] proposed a method that preserves tabular feature interdependencies by ensuring that modifications remain consistent with the underlying data structure. Similarly, FENCE [26] introduced a framework for crafting adversarial examples in security-related datasets, where feature dependencies must be respected to maintain feasibility. Other constraint-aware attacks focus on financial constraints, such as the cost-aware adversarial framework [27], which generates adversarial examples that adhere to a given budget, ensuring that attacks are feasible from an economic standpoint.

Generative model-based attacks also play a role in white-box strategies. Zhou et al. [28] leveraged generative adversarial networks (GANs) to create adversarial examples that preserve statistical properties of the original dataset. This technique enhances attack stealth by ensuring that adversarial modifications follow the distributional characteristics of clean data, making detection more difficult. Despite their high attack success rates, white-box methods have notable limitations. They assume full knowledge of the target model, which is often unrealistic in real-world applications where machine learning models are deployed in black-box environments. Additionally, white-box attacks can be countered through adversarial training, wherein models are retrained with adversarial examples to improve their robustness.

Unlike white-box attacks, black-box attacks do not require access to model parameters or gradients. Instead, they rely on query-based techniques to infer decision boundaries and generate adversarial perturbations. One of the earliest and most well-known black-box methods is the Zero-Order Optimisation (ZOO) attack [29], which approximates gradients using finite-difference methods. However, ZOO is computationally expensive due to the large number of queries required to estimate the gradient with high precision. Other decision-based attacks, such as the Boundary Attack [30], take an adversarial approach by starting with a large perturbation and iteratively refining it while maintaining misclassification.
A more refined version of this approach is the HopSkipJump Attack (HSJA) [31], which adapts decision-based attacks using dynamic step sizes to minimise queries while still achieving high attack success rates.

In addition to query-based attacks, some black-box methods adopt model-agnostic approaches that do not require direct access to model gradients. The Feature Importance Guided Attack (FIGA) [32] perturbs the most influential features of a dataset without relying on internal model parameters. By focusing on high-importance features, FIGA maximises the likelihood of misclassification while minimising the number of modified features. Cartella et al. [33] extended black-box adversarial attacks to real-world fraud detection systems by adapting boundary-based methods to bypass anomaly detection algorithms. These adaptations enable adversarial perturbations to remain undetected while still achieving model evasion.

Despite their practical advantages, black-box attacks have inherent limitations. They often require a large number of queries to approximate gradients, making them computationally expensive and slow. Moreover, query-efficient black-box attacks typically struggle to generate minimal perturbations, resulting in adversarial examples that are more perceptible compared to their white-box counterparts. Nevertheless, black-box attacks are more applicable in real-world scenarios, as attackers typically do not have access to model internals.

3. Methodology

Integrating adversarial attacks as an additional phase in the standard machine learning pipeline enables a systematic assessment of model robustness against adversarial perturbations. Inspired by established machine learning benchmark guidelines [34], we design our benchmark to evaluate adversarial attacks on tabular data. Figure 2 provides an overview of the proposed evaluation framework for this benchmark.

Figure 2: Overview of the evaluation framework for benchmarking adversarial attacks on tabular data.

3.1. Datasets

When selecting a dataset for the benchmark, several key criteria must be considered to ensure its suitability and effectiveness. Firstly, the dataset should be appropriate for classification tasks and include at least two classes to support the evaluation of binary classification. Secondly, to maintain computational feasibility and prevent excessive dimensionality, the total number of features, including those generated by one-hot encoding of categorical features, should not exceed 5000. Moreover, the dataset should be representative of real-world scenarios, with diverse instances to effectively assess model robustness and generalisation. Table 2 presents the data profiles of all 11 datasets used in the benchmark, summarising their key characteristics.[1]

[1] WineQuality (White) and WineQuality (Red) originate from the same dataset but are treated as two separate datasets in this benchmark.

Table 2: Data profiles of the 11 datasets used in the benchmark, including the total number of instances (Ntotal), instances for training (Ntrain), validation (Nvalidate), and adversarial perturbation (Ntest), as well as the number of numerical features (xnum), categorical features (xcat), one-hot encoded (categorical) features (xencoded), and the total number of features (xtotal).

Dataset             Ntotal   Ntrain   Nvalidate  Ntest    xnum  xcat  xencoded  xtotal
Adult                32561    22792        3256    6513      6     8        99     105
Electricity          45312    31717        4532    9063      7     1         7      14
COMPAS               16644    11650        1665    3329      8     8        50      58
Higgs              1000000   700000      100000  200000     28     0         0      28
house_16H            22784    15948        2279    4557     16     0         0      16
jm1                  10885     7619        1089    2177     21     0         0      21
BreastCancer           569      398          57     114     30     0         0      30
WineQuality-White     4898     3428         490     980     11     0         0      11
WineQuality-Red       1599     1119         160     320     11     0         0      11
phoneme               5404     3782         541    1081      5     0         0       5
MiniBooNE           130064    91044       13007   26013     50     0         0      50
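The Ntrain/Nvalidate/Ntest columns follow the stratified 70%/10%/20% split described in Section 4.1.1. As a minimal sketch, such a split could be reproduced as follows, assuming scikit-learn and the fixed seed of 42; the helper name is our own.

```python
from sklearn.model_selection import train_test_split

def stratified_split(X, y, seed=42):
    """70/10/20 train/validation/test split, stratified on the class label."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    # Split the remaining 30% into 10% validation and 20% test (1/3 vs 2/3).
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=2/3, stratify=y_rest, random_state=seed)
    return X_train, X_val, X_test, y_train, y_val, y_test
```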
3.2. Adversarial Attacks

We frame our adversarial attack problem based on the threat model taxonomy proposed in [17]. Specifically, we focus on the white-box attack scenario, where the attack algorithms have access to both the dataset and the predictive model's configuration. The objective of the adversarial attack is to deceive the predictive model's predictions. Notably, our benchmark does not enforce targeted misclassification, and all experiments generate untargeted attacks.

In practice, the choice of attack methods for structured data is based on various considerations such as their efficacy, efficiency, and complexity. However, a systematic review or benchmark of adversarial attacks on structured data is currently lacking. Consequently, our approach has been to explore existing attack benchmarks on images and then screen the attacks to identify those that can be extended to tabular data. Furthermore, most existing attack methods are designed for white-box settings, which allow the attacker to generate highly effective and efficient adversarial examples. Taking these factors into account, our selection of attack methods is guided by the following criteria:

1. The selected attack methods should be applicable to tabular data.
2. The selected attack methods should be designed for white-box attacks.

Based on these criteria, we have identified five attack methods that vary in complexity and approach and have demonstrated high effectiveness and efficiency in the field of computer vision. These include three bounded attacks (FGSM, BIM, and PGD) and two unbounded attacks (DeepFool and C&W). In addition, we include Gaussian noise as a baseline to evaluate the impact of random noise on model performance, providing a reference point for assessing the effectiveness of the intentional perturbation techniques in the selected attack methods compared to simple noise injection.

3.3. Predictive Models

When selecting machine learning models for adversarial attack benchmarking on tabular datasets, three crucial criteria guide our choices: diversity (spanning classical and modern architectures), interpretability (balancing transparency with complexity), and performance (ensuring competitive accuracy). Based on these principles, we evaluate four representative models, the first two of which are sketched in code after this list:

1. Logistic Regression (LR): A simple, interpretable linear baseline that establishes performance lower bounds and vulnerability benchmarks.
2. Multilayer Perceptron (MLP): A foundational neural network adept at capturing nonlinear patterns, offering a mid-complexity comparison point.
3. TabTransformer [35]: An attention-based model that processes tabular features via transformer layers, leveraging contextual relationships among features.
4. FTTransformer [36]: A transformer-based architecture that tokenizes numerical and categorical features, enabling unified processing through self-attention mechanisms.
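A minimal sketch of the two simpler baselines, assuming PyTorch and the hyperparameters reported later in Section 4.1.2 (64-unit hidden layers, ReLU, dropout p = 0.2); the MLP depth is our own illustrative choice (the paper fixes the width but not the depth), and the transformer models follow their reference implementations [35, 36] and are omitted here.

```python
import torch.nn as nn

def make_lr(n_features, n_classes=2):
    # Logistic regression as a single linear layer trained with cross-entropy.
    return nn.Linear(n_features, n_classes)

def make_mlp(n_features, n_classes=2, hidden=64, p_drop=0.2):
    # MLP with ReLU activations and dropout, mirroring Section 4.1.2.
    return nn.Sequential(
        nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(hidden, n_classes),
    )
```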
3.4. Evaluation Metrics

In the context of machine learning, especially in adversarial settings, a successful attack occurs when the model's predictions are manipulated or altered to produce incorrect or unintended results. Following common practice, the attack success rate (Eq. 9) is used to measure the effectiveness of an adversarial attack. It represents the percentage of instances x_i in a dataset for which the attack succeeds in misleading the predictive model, i.e., causing a misclassification:

\[ \text{Attack Success Rate} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\left( f(x^{\mathrm{adv}}_i) \neq y_i \right) \tag{9} \]
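As a quick reference, a one-line sketch of Eq. 9, assuming NumPy arrays of adversarial predictions and ground-truth labels (the helper name is ours):

```python
import numpy as np

def attack_success_rate(preds_adv, y_true):
    # Eq. 9: fraction of instances whose adversarial prediction mismatches the label.
    return float(np.mean(np.asarray(preds_adv) != np.asarray(y_true)))
```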
Considering the characteristics of tabular data, besides the traditional effectiveness metrics of adversarial attacks, metrics of imperceptibility are also included in the benchmark. Following our previous research [16], four quantitative metrics of imperceptibility can be employed: proximity, sparsity, deviation, and sensitivity. A combined code sketch of all four is given at the end of this subsection.

3.4.1. Proximity

• Definition: Average distance between inputs and generated adversarial examples.
• Purpose: Measures how close the adversarial examples are to the original inputs in terms of Euclidean distance.
• Considerations: Lower proximity values indicate better imperceptibility.

\[ \ell_p(x^{\mathrm{adv}}, x) = \| x^{\mathrm{adv}} - x \|_p = \begin{cases} \left( \sum_{i=1}^{n} |x^{\mathrm{adv}}_i - x_i|^p \right)^{1/p}, & p \in \{1, 2\} \\ \sup_n |x^{\mathrm{adv}}_n - x_n|, & p \to \infty \end{cases} \tag{10} \]

3.4.2. Sparsity

• Definition: Uses the average ℓ0 distance to calculate the average number of perturbed features.
• Purpose: Quantifies how many features are modified on average in the adversarial examples.
• Considerations: Lower sparsity values indicate better imperceptibility.

\[ \mathrm{Spa}(x^{\mathrm{adv}}, x) = \ell_0(x^{\mathrm{adv}}, x) = \sum_{i=1}^{n} \mathbb{1}\left( x^{\mathrm{adv}}_i \neq x_i \right) \tag{11} \]

3.4.3. Deviation

• Definition: Uses the Mahalanobis distance (MD) to calculate the distance between generated adversarial examples and the original dataset. Thresholds are determined using statistical methods to identify outliers, and the total outlier rate is calculated.
• Purpose: Captures how much the adversarial examples deviate from the normal data distribution.
• Considerations: Lower MD values and outlier rates indicate better imperceptibility.

\[ \mathrm{MD}(x^{\mathrm{adv}}, x) = \sqrt{ (x^{\mathrm{adv}} - x) \, V^{-1} \, (x^{\mathrm{adv}} - x)^{T} } \tag{12} \]

MD is a measure of the distance between a point and a distribution, taking into account the covariance structure of the data. In this context, a threshold is required to determine which data points are considered outliers based on their MD. We calculate the critical value for the MD using the chi-squared distribution. Mathematically, the critical value is determined by first calculating the chi-squared critical value χ²_{α,d} corresponding to the desired significance level α and the degrees of freedom d, where d is the number of dimensions or features in the data. This critical value represents the boundary beyond which a proportion α of the chi-squared distribution lies. The Mahalanobis distance threshold t is then computed by taking the square root of the chi-squared critical value. If the MD value of a data point exceeds the threshold, it suggests that the point is farther from the centre of the distribution than expected, and we consider it an outlier.

\[ \text{Outlier Rate} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\left( \mathrm{MD}(x^{\mathrm{adv}}_i, x) > t \right), \quad \text{where } t = \sqrt{ \chi^2_{\alpha, d} } \tag{13} \]

3.4.4. Sensitivity

• Definition: A metric that checks whether sensitive features are changed, based on distance metrics.
• Purpose: Focuses on the impact of the attack on sensitive features, which may be critical for certain applications.
• Considerations: The metric should be sensitive to changes in important features.

\[ \mathrm{SDV}(x_i) = \sqrt{ \frac{ \sum_{m} (x_i - \bar{x}_i)^2 }{ m } }, \qquad \mathrm{SEN}(x, x^{\mathrm{adv}}) = \sum_{i=1}^{n} \frac{ | x^{\mathrm{adv}}_i - x_i | }{ \mathrm{SDV}(x_i) } \tag{14} \]
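The four metrics can be sketched compactly as follows, assuming NumPy/SciPy, paired arrays X (originals) and X_adv (their adversarial counterparts), and clean training data X_train for the covariance and standard deviations. The function names, the ℓ0 tolerance, and the reading of Eq. 12 as the distance of each adversarial point from the clean distribution are our own choices.

```python
import numpy as np
from scipy.stats import chi2

def proximity(X, X_adv, p=2):
    """Eq. 10: mean l_p distance between originals and adversarial examples."""
    if np.isinf(p):
        return np.abs(X_adv - X).max(axis=1).mean()
    return np.linalg.norm(X_adv - X, ord=p, axis=1).mean()

def sparsity(X, X_adv, tol=1e-8):
    """Eq. 11: mean number of perturbed features (l_0), with a numeric tolerance."""
    return (np.abs(X_adv - X) > tol).sum(axis=1).mean()

def outlier_rate(X_train, X_adv, alpha=0.05):
    """Eqs. 12-13: share of adversarial points beyond the chi-squared MD threshold."""
    mu = X_train.mean(axis=0)
    V_inv = np.linalg.pinv(np.cov(X_train, rowvar=False))
    diff = X_adv - mu
    md = np.sqrt(np.einsum('ij,jk,ik->i', diff, V_inv, diff))  # per-row Mahalanobis
    t = np.sqrt(chi2.ppf(1 - alpha, df=X_train.shape[1]))      # t = sqrt(chi2_{alpha,d})
    return (md > t).mean()

def sensitivity(X, X_adv, X_train):
    """Eq. 14: perturbation size weighted by each feature's standard deviation."""
    sdv = X_train.std(axis=0) + 1e-12   # guard against zero-variance features
    return (np.abs(X_adv - X) / sdv).sum(axis=1).mean()
```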
4. Evaluation

Our evaluation methodology aims to address three primary research questions related to adversarial attacks on tabular data, as follows.

• RQ1. How effective are the evaluated adversarial attack algorithms on tabular data? In our benchmark, this is measured by the success rates of individual adversarial attack methods in deceiving the target model's classification.

• RQ2. How imperceptible are these adversarial attack algorithms on tabular data? Our benchmark evaluates the imperceptibility of individual adversarial attack methods on tabular data in terms of the four key quantitative properties specified in Section 3.4.
  – RQ2.1 (Sparsity): How many features are modified in the adversarial examples?
  – RQ2.2 (Proximity): How close are the adversarial examples to the original samples in the feature space?
  – RQ2.3 (Deviation): How significantly do the modified features differ from their original values?
  – RQ2.4 (Sensitivity): To what extent do the perturbations respect sensitive, narrow-range features?

• RQ3. Whether and how can the evaluated algorithms achieve a balance between imperceptibility and effectiveness? Based on the benchmark evaluation, we conduct a trade-off analysis of different adversarial attack methods to identify those that strike the best balance between effectiveness and imperceptibility.

4.1. Experiment Setup

4.1.1. Datasets and Preprocessing

Following the dataset selection criteria outlined in Section 3.1, we implement a standardised preprocessing pipeline for all benchmark datasets. Each dataset is first partitioned using stratified sampling to maintain class distributions, allocating 70% for training, 10% for validation, and 20% for testing and adversarial evaluation. We fix the random seed (42) throughout this process to ensure reproducibility.

For feature engineering, we remove constant and duplicate features, then address missing values through median imputation for numerical features and mode imputation for categorical variables. Categorical features are transformed via one-hot encoding, while all numerical features are normalised to the [0, 1] range using min-max scaling. This consistent preprocessing approach ensures fair comparison across models while preserving each dataset's inherent characteristics documented in Table 2.

4.1.2. Models

Based on the methods in the previous section, we select four predictive models for our research. LR is representative of transparent models; the other three (MLP, TabTransformer, and FT-Transformer) are deep learning models, which are difficult for humans to interpret. All models were trained for 20 epochs with a batch size of 512, optimised via Adam (learning rate 1e-3) using cross-entropy loss. For regularisation, we applied dropout (p = 0.2) to the MLP and both transformer models. The MLP uses ReLU activations, while the transformers employ ReLU in their feed-forward components. Transformer-specific configurations include 8 attention heads per layer and 6 stacked layers, with all embeddings dimensioned to match the MLP's hidden layer widths (64 units) for fair comparison.

4.1.3. Adversarial Attack Configuration

To thoroughly evaluate model robustness, our benchmark employs five white-box attack methods, including the foundational FGSM attack along with its iterative variants (BIM and PGD), plus optimisation-based approaches (DeepFool and C&W). We test these across a carefully designed spectrum of perturbation budgets (ϵ ∈ {0, 0.01, 0.03, 0.05, 0.1, 0.3, 0.5, 1}), where ϵ = 0 serves as our natural accuracy baseline. This graduated approach allows us to precisely characterise how different models degrade under increasing adversarial pressure.

To provide a thorough evaluation, each attack algorithm is configured with specific hyperparameters tailored to its design. The FGSM attack represents our simplest case, applying the full ϵ perturbation in a single gradient step. This provides a fundamental benchmark against which we compare more sophisticated methods. For the iterative attacks (BIM/PGD), we implement a relative step-size strategy where each of the T = 10 iterations applies a perturbation of magnitude ϵ/T, ensuring a controlled approach to the total budget.
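Putting these pieces together, the budget sweep for the iterative attacks can be sketched as follows, reusing the pgd_attack and attack_success_rate helpers sketched earlier; the tensor names and loop structure are illustrative assumptions, not the benchmark's exact harness.

```python
import torch

epsilons = [0.0, 0.01, 0.03, 0.05, 0.1, 0.3, 0.5, 1.0]
T = 10  # iterations for BIM/PGD

asr_by_budget = {}
for eps in epsilons:
    if eps == 0.0:
        # eps = 0 is the natural (clean) baseline: no perturbation is applied.
        x_eval = X_test_t                       # assumed test-split feature tensor
    else:
        # Relative step size: each of the T iterations moves by eps / T.
        x_eval = pgd_attack(model, X_test_t, y_test_t,
                            epsilon=eps, alpha=eps / T, steps=T)
    with torch.no_grad():
        preds = model(x_eval).argmax(dim=1)
    asr_by_budget[eps] = attack_success_rate(preds.numpy(), y_test_t.numpy())
```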
The DeepFool attack employs an iterative boundary-crossing approach with carefully calibrated parameters: a maximum of 50 iterations ensures convergence while maintaining computational efficiency, and a 2% overshoot factor (0.02) guarantees reliable crossing of decision boundaries. We configure it to evaluate 10 candidate classes per iteration and operate directly on model logits, providing precise gradient information for minimal adversarial perturbations.

For the C&W attack, we implement a rigorous optimisation process controlled by three key parameters: (1) 10 binary search steps to optimally scale the penalty constant, (2) an initial constant value of 0.001 for gradual constraint adjustment, and (3) zero-confidence (κ = 0) attacks to produce minimally perturbed adversarial examples. Each binary search phase executes 10 optimisation steps with a learning rate of 0.1, balancing attack success rate with computational cost.

4.2. RQ1: How effective are the evaluated adversarial attack algorithms on tabular data?

We first evaluate model accuracy to understand the performance of the predictive models before exposing them to adversarial attacks. If a model's accuracy is too low, such as below 60%, it may be easily deceived even without attacks, making further evaluation unnecessary. Table 3 presents the accuracy results for the four selected models across the 11 datasets. As shown, the three deep learning models generally outperform Logistic Regression (LR) on most datasets, whereas for certain datasets, including Adult, jm1, and COMPAS, LR achieves similar performance. Overall, the models demonstrate sufficient accuracy, with all exceeding 63%, making them suitable for further adversarial testing.

Table 3: Model accuracy of four predictive models across 11 datasets.

Dataset             LR      MLP     TabTransformer  FTTransformer
Adult               0.834   0.8337  0.8328          0.799
BreastCancer        0.9386  0.9737  0.9035          0.9737
Compas              0.6654  0.6738  0.7053          0.6858
Electricity         0.6607  0.7635  0.762           0.7712
Higgs               0.6366  0.7234  0.6951          0.7296
MiniBooNE           0.7724  0.8372  0.8497          0.8402
WineQuality-Red     0.7219  0.7344  0.7281          0.7344
WineQuality-White   0.6745  0.7469  0.7316          0.752
house_16H           0.7029  0.8578  0.8251          0.8466
jm1                 0.8075  0.8098  0.8066          0.8107
phoneme             0.7095  0.7872  0.7882          0.8002

Figure 3: Attack success rate of evaluated attack methods on all three mixed datasets and four ML models.

Mixed Datasets. Figure 3 indicates that the Electricity dataset demonstrates unique characteristics compared to the other mixed datasets. Here, all attack methods achieve notably high success rates across model architectures, with even the typically underperforming C&W attack reaching nearly 100% success on LR, MLP, and TabTransformer. This suggests that the feature distribution or model decision boundaries for this dataset may be particularly susceptible to adversarial manipulation. Interestingly, while other datasets show clear differentiation between attack types, Electricity exhibits more uniform patterns across attack methods.

In contrast, the Adult and Compas datasets show clearer distinctions in attack effectiveness. The ℓ∞-based attacks (FGSM, PGD, and BIM) consistently outperform the ℓ2-based attacks (C&W and DeepFool) on these datasets. This performance gap suggests that the bounded perturbation approach of ℓ∞ attacks is particularly effective for these mixed-type datasets, possibly due to their ability to make targeted changes to critical features without being constrained by the ℓ2 norm's emphasis on overall perturbation magnitude.

From a model architecture perspective, transformer-based models demonstrate greater robustness against adversarial attacks compared to traditional approaches. Both TabTransformer and FTTransformer require higher epsilon values to achieve the same attack success rates as seen with LR and MLP, particularly on the Adult dataset.
This suggests that the attention mechanisms and deeper architectural features of transformers may provide some inherent robustness to adversarial perturbations when handling mixed tabular data.

Numerical Datasets. As shown in Figures 4, 5 and 6, our analysis of the eight numerical datasets reveals more consistent patterns compared to the mixed datasets, though with several dataset-specific characteristics worth noting.

The Higgs and house_16H datasets (Figure 4) exhibit remarkably uniform vulnerability to ℓ∞-based attacks across model architectures. On these datasets, FGSM, PGD, and BIM produce nearly identical attack success curves, suggesting that the simpler FGSM approach may be sufficient for compromising models trained on these data distributions.

Figure 4: Attack success rate of evaluated attack methods on two (out of eight) numerical datasets and four ML models.

Figure 5: (Cont.) Attack success rate of evaluated attack methods on another three (out of eight) numerical datasets and four ML models.

The jm1 dataset demonstrates a fascinating threshold phenomenon where attack success suddenly jumps from baseline to nearly perfect across multiple attack methods and models. This sharp transition suggests a critical vulnerability point in the feature space where slight perturbations beyond a specific threshold completely undermine model performance.

The BreastCancer dataset (Figure 5) provides perhaps the most diverse response to different attack methods. Here, PGD shows superior performance when targeting transformer-based models, while the FTTransformer exhibits unusual non-monotonic vulnerability patterns where attack success sometimes decreases at higher epsilon values. This counter-intuitive behaviour suggests potential overfitting of attack algorithms to specific decision-boundary regions or gradient-masking effects in the transformer architecture.

Both WineQuality datasets (Red and White, Figures 5 and 6) show that ℓ∞-based attacks require substantially lower epsilon values to achieve high success rates compared to ℓ2-based approaches. DeepFool eventually reaches comparable performance but demands significantly higher perturbation budgets, making it less efficient from an adversarial perspective. The phoneme and MiniBooNE datasets (Figure 6) further confirm the superiority of ℓ∞-based attacks, with all three methods (FGSM, PGD, BIM) demonstrating nearly identical performance trajectories.

Figure 6: (Cont.) Attack success rate of evaluated attack methods on the remaining three (out of eight) numerical datasets and four ML models.

From an architectural standpoint, LR models consistently demonstrate the highest vulnerability across the numerical datasets, often exhibiting sharp threshold effects where attack success rates increase dramatically at specific epsilon values. This suggests that the linear decision boundaries of logistic models may be easier to exploit with minimal perturbations. In contrast, MLP and transformer models typically require larger epsilon values before showing significant compromise, though the specific patterns vary by dataset.

The C&W attack shows highly inconsistent performance across the numerical datasets, sometimes barely outperforming the random-noise baseline while occasionally achieving competitive results on specific model-dataset combinations. This variability suggests that C&W's effectiveness is highly dependent on the specific characteristics of the dataset and model architecture, making it a less reliable general-purpose attack for numerical tabular data.
Overall, our findings on the numerical datasets indicate that ℓ∞-based attacks provide the most consistent and efficient approach for compromising tabular models, with FGSM often performing similarly to the more computationally intensive PGD and BIM methods.

4.3. RQ2: How imperceptible are these adversarial attack algorithms on tabular data?

Based on our analysis of attack success rates across varying ϵ values, we establish a systematic approach for selecting optimal attack budgets. For each experimental setting, we identify the value at which attack success rates first reach a plateau (the stationary point of the curve), beyond which further increases in ϵ yield negligible performance improvements. The specific attack budgets selected through this methodology are detailed in Appendix A. However, direct comparisons across different models and datasets would be methodologically unsound, as optimal ϵ values vary significantly between these contexts. To ensure fair and consistent benchmarking, we address this variation by identifying the most frequently occurring ϵ value for each attack method across all tested models and datasets, as presented in Table 4. These representative ϵ values serve as our standardised benchmark parameters for the subsequent comparative analyses in RQ2 and RQ3.

Table 4: Standardised ϵ value settings for each attack method used in the analysis of RQ2 and RQ3.

Attacks  Gaussian  FGSM  BIM  PGD  C&W  DeepFool
ϵ        1         0.3   0.3  0.3  1    1

4.3.1. RQ2.1: How many features are modified in the adversarial examples?

Our analysis of sparsity patterns reveals distinct behavioural characteristics among adversarial attack methods while highlighting the influence of dataset dimensionality and model architecture. The sparsity measure quantifies the proportion of features modified in adversarial examples, with higher values indicating that more features are perturbed.

Sparsity – Mixed Datasets. Our analysis of sparsity patterns in the mixed datasets (Figures 7, 8 and 9) reveals complex interaction effects between feature types, attack algorithms, and model architectures, providing critical insights into the selective vulnerability of different feature categories to adversarial perturbation.

In datasets dominated by categorical features, such as Adult (105 total features, 99 categorical, Figure 7) and Compas (58 total features, 50 categorical, Figure 8), a clear divide in the treatment of categorical versus numerical features is evident across attack methods. FGSM, BIM, C&W, and DeepFool demonstrate a strong numerical-feature bias when attacking neural network models (MLP, TabTransformer, FTTransformer), with sparsity rates for categorical features near zero (0-1%) while maintaining high sparsity rates for numerical features (72-99%). This pronounced selectivity indicates that these attacks algorithmically prioritise numerical features, effectively ignoring categorical dimensions despite their prevalence in the feature space.

Figure 7: Sparsity results of five evaluated attack methods and four ML models on the Adult dataset: (a) all features, (b) categorical features, (c) numerical features.

Figure 8: Sparsity results of evaluated attack methods and four ML models on the COMPAS dataset: (a) all features, (b) categorical features, (c) numerical features.

PGD stands apart as the only attack capable of consistently modifying categorical features across all models and datasets, achieving approximately 50% sparsity on categorical features regardless of dataset composition. This unique capability suggests that PGD's perturbation mechanism operates fundamentally differently from other ℓ∞-based approaches, likely due to its projection mechanism, which allows effective navigation of the discrete space represented by one-hot encoded categorical features.
The LR model exhibits markedly different vulnerability patterns from the neural architectures. When attacked by FGSM, PGD, and BIM, LR models show moderate categorical-feature sparsity (40-52%) across all mixed datasets, suggesting that these models encode information differently, making categorical features more susceptible to perturbation. This architectural effect is particularly evident in the Compas dataset, where the categorical-feature sparsity rate for LR (52%) significantly exceeds that of the neural networks (0%) when attacked by FGSM and BIM.

Figure 9: Sparsity results of evaluated attack methods and four ML models on the Electricity dataset: (a) all features, (b) categorical features, (c) numerical features.

The Electricity dataset (14 total features, equally split between 7 categorical and 7 numerical features, Figure 9) offers a unique perspective on attack behaviour in balanced feature spaces. Here, ℓ∞-based attacks maintain their numerical-feature bias despite the equal feature distribution, with FGSM and BIM achieving 99% sparsity on numerical features while leaving categorical features unmodified (0% sparsity) when attacking neural networks. This persistent selectivity in a balanced feature environment further confirms the algorithmic preference of these attacks for continuous variables over discrete ones.

C&W attacks demonstrate consistently low overall sparsity rates across the mixed datasets (0-2% on most models), but exhibit moderate performance on numerical features in the Electricity dataset (64-69%). This selective numerical targeting despite generally low effectiveness highlights how ℓ2-based attacks struggle with the mixed feature landscape of tabular data, particularly with one-hot encoded categorical variables.

DeepFool shows minimal categorical-feature modification (0-0.4%) across all neural network models while achieving moderate sparsity on numerical features (40-74%), positioning it as even more numerically focused than the other attacks. This extreme preference for numerical features suggests that DeepFool's gradient-based optimisation may be fundamentally incompatible with the discrete nature of categorical variables in tabular data.

These findings collectively demonstrate that, with the exception of PGD, current adversarial attacks on tabular data exhibit a strong inherent bias toward perturbing numerical features while largely ignoring categorical dimensions, regardless of their prevalence in the feature space.

Sparsity – Numerical Datasets. For the numerical datasets, as illustrated in Figures 10 and 11, we observe distinct patterns of feature perturbation across different attack methods. FGSM, PGD, and BIM consistently demonstrate high sparsity rates, modifying nearly all features of the original inputs (approximately 80-100% sparsity) across most models and datasets. This comprehensive modification approach persists regardless of feature dimensionality, from the low-dimensional phoneme dataset (5 features, Figure 11a) to the high-dimensional MiniBooNE dataset (50 features, Figure 11b). PGD exhibits particularly aggressive feature modification, achieving near-perfect sparsity (99-100%) in many configurations, especially with TabTransformer models.

In contrast, C&W attacks display highly selective and context-dependent behaviour. C&W sparsity rates range dramatically from 0% (no features modified) in extreme cases, such as the LR model on the phoneme and jm1 (Figure 10c) datasets, to moderate rates (50-74%) in most other configurations.
The C&W attack shows particularly low sparsity rates on the MiniBooNE dataset (5-28%) compared to smaller datasets, suggesting increased selectivity in higher-dimensional feature spaces. DeepFool consistently occupies an intermediate position in feature modification strategy, with sparsity
https://arxiv.org/abs/2505.21027v1
rates typically ranging from 17-80%. This attack shows its most selective behaviour on the Higgs dataset (17-24%, Figure 11b) and more moderate selectivity on other datasets. Interestingly, DeepFool’s sparsity rates appear least affected by model architecture differences, maintaining relatively consistent modification patterns across different models for the same dataset. Model architecture significantly influences adversarial sparsity patterns. The LR model experiences the most extreme variations in feature modification, particularly with C&W and DeepFool attacks. TabTransformer shows notable variability in response to different attacks, while MLP models generally exhibit more consistent sparsity rates. The Breast- Cancer dataset (Figure 10d) uniquely demonstrates high sparsity rates (¿80%) for all attack methods and model combinations, suggesting that all features in this dataset are relevant to the classification task. Feature dimensionality appears inversely correlated with sparsity rates for ℓ2-based at- tacks, with C&W and DeepFool showing increased selectivity (lower sparsity) on larger datasets. This dimensional effect is particularly pronounced for C&W attacks, which mod- ify only 5-28% of features on the 50-feature MiniBooNE dataset compared to 50-74% on smaller datasets like phoneme (5 features These findings reveal fundamental algorithmic differences in adversarial feature selection: ℓ∞-based methods distribute perturbations broadly across the feature space, while ℓ2-based methods strategically modify subsets of features. 32 (a) Higgs (b) house 16H (c) jm1 (d) BreastCancer (e) WineQuality-White (f) WineQuality-Red Figure 10: Sparsity results of evaluated attack methods and four ML models on six (out of eight) numerical datasets. 33 (a) phoneme (b) MiniBooNE Figure 11: (Cont.) Sparsity results of evaluated attack methods and four ML models on the remaining two (out of eight) numerical datasets. 4.3.2. RQ2.2: How close are the adversarial examples to the original samples in the feature space? Our proximity analysis measures how close adversarial examples remain to their original samples in the feature space using ℓ2distance metrics. The heatmaps in Figures 12, 13 and 14 reveal distinct patterns across attack types, model architectures, and datasets that provide important insights into the imperceptibility of different adversarial approaches. Proximity – Mixed dataset. The proximity results for mixed datasets (Adult, Electricity, and Compas) demonstrate clear differences between ℓ2-based and ℓ∞-based attack algorithms. In the Adult dataset (Figure 12a), we observe a substantial proximity advantage for ℓ2-based attacks across all model architectures. C&W consistently produces the closest ad- versarial examples to original samples, with remarkably low ℓ2distances ranging from 0.00 to 0.13, followed by DeepFool with distances between 0.30 and 0.41. In contrast, ℓ∞-based attacks generate examples significantly further from originals, with distances typically rang- ing from 0.64 to 1.96. PGD is particularly notable for creating the most distant adversarial examples, reaching an exceptional ℓ2distance of 4.42 in the MLP model and 2.25 in the FTTransformer. This extreme difference suggests that PGD’s optimisation approach, while effective at finding adversarial examples, sacrifices proximity considerably compared to other methods in this dataset. 
Figure 12: Proximity results of evaluated attack methods and four ML models on all three mixed datasets: (a) Adult, (b) Electricity, (c) Compas.

The Electricity dataset (Figure 12b) exhibits lower ℓ2 distances overall compared to Adult, but maintains the same pattern of ℓ2-based attacks preserving significantly better proximity. The distance gap between attack types is most pronounced in LR models, where ℓ∞-based attacks produce distances around 0.89, while C&W and DeepFool achieve distances of only 0.17 and 0.10, respectively. Interestingly, all neural network architectures demonstrate similar proximity values within each attack type, suggesting that model complexity has minimal impact on proximity for this dataset. This could indicate that the Electricity dataset's feature space permits finding closer adversarial examples regardless of model architecture.

For the Compas dataset (Figure 12c), we again observe the superior proximity performance of ℓ2-based attacks, but with interesting model-specific variations. While C&W achieves remarkable proximity (distances between 0.00 and 0.07), PGD demonstrates extremely poor proximity on certain model architectures, particularly the FTTransformer, where it reaches 3.14, the highest ℓ2 distance across all mixed datasets. This suggests that certain combinations of dataset characteristics, model architectures, and attack algorithms can produce significantly outlying proximity behaviours.

Proximity – Numerical datasets. Our analysis of the numerical datasets reveals both consistent patterns and intriguing variations in proximity metrics across the eight datasets examined.

The Higgs dataset (Figure 13a) demonstrates uniformly small ℓ2 distances across all attack methods and models, with distances ranging from 0.01 to 0.25. While ℓ2-based attacks still maintain better proximity (0.01-0.07) than ℓ∞-based attacks (0.19-0.25), the difference is less pronounced than in the mixed datasets. This suggests that the Higgs feature space may be structured in a way that adversarial examples can be found relatively close to original samples regardless of attack methodology.

In contrast, the house_16H dataset (Figure 13b) shows much greater variability in proximity across model architectures. LR and TabTransformer models exhibit substantially higher ℓ2 distances for ℓ∞-based attacks (0.91-0.95) compared to MLP and FTTransformer models (0.29-0.36). This pattern suggests that model architecture plays a significant role in determining proximity characteristics for this dataset, potentially due to differences in decision-boundary complexity.

The jm1 dataset (Figure 13c) reveals an interesting interaction between attack algorithms and model architectures. While ℓ∞-based attacks maintain consistent ℓ2 distances (1.13) for the LR model, their proximity varies significantly for TabTransformer, with distances ranging from 0.35 for BIM to 0.96 for FGSM. This three-fold difference in proximity, despite all attacks using the same ℓ∞ constraint, highlights how attack optimisation approaches interact differently with various model architectures.

The BreastCancer dataset (Figure 13d) demonstrates some of the highest ℓ2 distances among the numerical datasets, particularly for ℓ∞-based attacks against LR (around 1.49). PGD again produces exceptional distances in certain cases, reaching 2.11 with FTTransformer, more than six times the distance of BIM (0.34) on the same model. This extreme variation suggests that PGD's optimisation approach can sometimes explode the ℓ2 distance while pursuing adversarial examples under an ℓ∞ constraint.
Figure 13: Proximity results of evaluated attack methods and four ML models on six (out of eight) numerical datasets: (a) Higgs, (b) house_16H, (c) jm1, (d) BreastCancer, (e) WineQuality-White, (f) WineQuality-Red.

Figure 14: Proximity results of evaluated attack methods and four ML models on the remaining two (out of eight) numerical datasets: (a) phoneme, (b) MiniBooNE.

The WineQuality datasets (White and Red, Figures 13e and 13f) display moderate ℓ2 distances for ℓ∞-based attacks, but with notable attack-specific patterns. On WineQuality-White, PGD produces consistently higher ℓ2 distances compared to FGSM and BIM across all model architectures, most pronounced on the LR model (1.33 versus 0.89). However, this pattern is less evident on WineQuality-Red, suggesting dataset-specific interactions with attack algorithms.

The phoneme dataset (Figure 14a) exhibits significant model-dependent proximity variations. For LR, PGD again produces the highest ℓ2 distance (1.27), while for TabTransformer, FGSM generates the most distant examples (1.05). C&W achieves exceptional proximity across all models, reaching zero for LR and 0.10 for the transformer models. This highlights C&W's effectiveness at finding minimal-distance adversarial examples due to its direct ℓ2 optimisation objective.

The MiniBooNE dataset (Figure 14b) continues the trend of PGD generating the most distant adversarial examples, with ℓ2 distances of 1.62 for LR and 1.35 for FTTransformer, significantly higher than other attack methods on the same models. These extreme values for PGD across multiple datasets suggest a fundamental characteristic of its optimisation approach that consistently sacrifices proximity for adversarial effectiveness.

From a model architecture perspective, LR generally exhibits the highest ℓ2 distances across datasets, particularly for ℓ∞-based attacks. This suggests that the linear decision boundaries of logistic models may require larger perturbations to cross, resulting in less proximate adversarial examples. Transformer-based models show more variable proximity patterns across datasets, sometimes exhibiting better proximity than simpler models (as in jm1) and sometimes worse (as in BreastCancer with PGD).

Overall, our proximity analysis confirms that ℓ2-based attacks consistently generate adversarial examples that remain closer to original samples compared to ℓ∞-based attacks. This aligns with their objective functions, since ℓ2 attacks directly optimise for minimal distance, while ℓ∞ attacks focus on limiting the maximum change to any individual feature. Among the ℓ∞-based attacks, PGD frequently produces the most distant examples, suggesting that its iterative process and strong adversarial optimisation may come at a significant cost to proximity.

4.3.3. RQ2.3: How significantly do the modified features differ from their original values?

Our deviation analysis examines how significantly the adversarial examples differ from the original data distribution. The heatmaps presented in Figures 15-17 reveal clear patterns in the outlier rates produced by different attack algorithms across model architectures and datasets.

Deviation – Mixed datasets. On the Adult dataset (Figure 15a), we observe a striking dichotomy: all three ℓ∞-based attacks (FGSM, PGD, and BIM) generate adversarial examples with outlier rates consistently at or near 100% across all model architectures. This indicates that these attacks produce perturbations that push samples substantially outside their original feature distributions. In contrast, C&W and DeepFool exhibit significantly lower outlier rates, with C&W ranging from 0.14 to 0.34 and DeepFool from 0.20 to 0.46, depending on the model architecture. This pattern suggests that ℓ2-based attacks tend to preserve the original data distribution more effectively, potentially making them more difficult to detect through distribution-based defences.
Deviation – Mixed dataset. On the Adult dataset (Figure 15a), we observe a striking dichotomy: all three ℓ∞-based attacks (FGSM, PGD, and BIM) generate adversarial examples with outlier rates consistently at or near 100% across all model architectures. This indicates that these attacks produce perturbations that push samples substantially outside their original feature distributions. In contrast, C&W and DeepFool exhibit significantly lower outlier rates, with C&W ranging from 0.14 to 0.34 and DeepFool from 0.20 to 0.46, depending on the model architecture. This pattern suggests that ℓ2-based attacks tend to preserve the original data distribution more effectively, potentially making them more difficult to detect through distribution-based defences.

The Electricity dataset (Figure 15b) presents an interesting deviation from this pattern. While ℓ∞-based attacks still generally produce higher outlier rates than ℓ2-based approaches, the overall rates are lower compared to
other mixed datasets. FGSM, PGD, and BIM generate outlier rates ranging from 0.60 to 0.88, with the TabTransformer model showing particular vulnerability to distribution shifts, its outlier rate reaching 0.88 for FGSM. Both C&W and DeepFool maintain substantially lower outlier rates across all models (0.02-0.09 for C&W and 0.12-0.39 for DeepFool), reinforcing the trend that ℓ2-based attacks tend to remain closer to the original data distribution.

Figure 15: Deviation results of evaluated attack methods and four ML models on all three mixed datasets: (a) Adult, (b) Electricity, (c) Compas.

For the Compas dataset, we see the clearest demarcation between attack types. All ℓ∞-based attacks generate perfect 1.00 outlier rates across all model architectures, indicating complete departure from the original feature distributions. Meanwhile, C&W consistently produces the lowest outlier rates (0.06-0.09), and DeepFool generates moderate outlier rates (0.22-0.51) that vary by model architecture. This stark contrast highlights the fundamentally different approaches to perturbation optimisation under ℓ∞ and ℓ2 norm constraints.

Deviation – Numerical dataset. The Higgs dataset (Figure 16a) displays the expected pattern, where ℓ∞-based attacks predominantly produce outlier rates of 1.00, with some exceptions for FTTransformer models, for which PGD and BIM show reduced rates of 0.67 and 0.63, respectively. ℓ2-based attacks maintain significantly lower outlier rates (0.12-0.25), consistent with their tendency to preserve data distributions.

Figure 16: Deviation results of evaluated attack methods and four ML models on six (out of eight) numerical datasets: (a) Higgs, (b) house_16H, (c) jm1, (d) BreastCancer, (e) WineQuality-White, (f) WineQuality-Red.

Figure 17: (Cont.) Deviation results of evaluated attack methods and four ML models on the remaining two (out of eight) numerical datasets: (a) phoneme, (b) MiniBooNE.

The house_16H dataset (Figure 16b) presents an interesting case in which ℓ∞-based attacks maintain near-perfect outlier rates across all models, but ℓ2-based attacks also demonstrate relatively high outlier rates compared to the other datasets. C&W produces outlier rates ranging from 0.36 to 0.65, while DeepFool generates rates from 0.38 to 0.74. This suggests that the feature distribution of house_16H may be particularly sensitive to perturbations, causing even ℓ2-constrained modifications to push samples outside their original distributions.

The jm1 dataset (Figure 16c) exhibits uniformly high outlier rates for ℓ∞-based attacks (1.00 across all models) and surprisingly high rates for ℓ2-based attacks as well, with C&W reaching 0.16-0.78 and DeepFool 0.52-0.83. This indicates that jm1's feature space may be particularly conducive to generating out-of-distribution samples regardless of the attack employed.

The BreastCancer dataset (Figure 16d) stands out as an anomaly among the numerical datasets, with all attack methods producing remarkably high outlier rates. Even C&W and DeepFool, which typically generate in-distribution samples, produce outlier rates ranging from 0.67 to 0.93. This suggests that the BreastCancer dataset may have a particularly compact or tightly clustered feature distribution, in which even small perturbations can push samples beyond distribution boundaries.

The WineQuality datasets (White and Red, Figure 16e and 16f) display contrasting behaviours. WineQuality-White shows high outlier rates for ℓ∞-based attacks (0.98-1.00) and moderate rates for ℓ2-based attacks (0.30-0.73).
In contrast, WineQuality-Red exhibits a wider range of outlier rates even for
ℓ∞-based attacks, with BIM producing rates as low as 0.82 on TabTransformer. This suggests that the Red variant may have a more dispersed feature distribution that can accommodate certain perturbations while remaining in-distribution.

The phoneme dataset (Figure 17a) reveals the most variable behaviour across models and attacks. While the LR model remains highly susceptible to distribution shifts from ℓ∞-based attacks (1.00 outlier rates), other models show surprising resistance. BIM produces outlier rates as low as 0.11 on TabTransformer, and even FGSM shows reduced effectiveness on MLP, with a 0.71 outlier rate. ℓ2-based attacks maintain their typical low outlier pattern, with rates as low as 0.05 for DeepFool on MLP.

Finally, the MiniBooNE dataset (Figure 17b) demonstrates consistently high outlier rates across all attack types and models. Even C&W, which typically preserves distribution characteristics, produces outlier rates from 0.20 to 0.79. This suggests that MiniBooNE may have a feature space in which adversarial perturbations, regardless of norm constraints, readily push samples outside their original distributions.

From a model architecture perspective, there are notable variations in how different models respond to distribution-shifting attacks. LR models generally exhibit the highest vulnerability to distribution shifts across datasets, particularly for ℓ∞-based attacks. In contrast, transformer-based models occasionally demonstrate some resilience to certain attacks, as seen in the phoneme dataset, where TabTransformer shows a remarkably low outlier rate (0.11) for BIM. This suggests that the more complex decision boundaries of transformer architectures may sometimes accommodate certain perturbations while maintaining in-distribution characteristics.

Overall, our deviation analysis confirms that ℓ∞-based attacks consistently generate adversarial examples that deviate significantly from the original data distributions, while ℓ2-based attacks tend to produce more in-distribution perturbations. However, the specific patterns vary notably with dataset characteristics and model architecture, highlighting the complex interplay between attack methods and the underlying data structures they attempt to exploit.

4.3.4. RQ2.4: How much do perturbations respect narrow-guard feature perturbation?

Our sensitivity analysis examines how adversarial attacks handle narrow-guard feature perturbation, particularly for features with narrow distributions in tabular data. The heatmaps in Figures 18, 19 and 20 reveal complex patterns that vary significantly across datasets, attack algorithms, and model architectures. Rather than showing consistent behaviours, the sensitivity metrics highlight the contextual nature of how perturbations interact with narrowly distributed features.
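The sensitivity metric is defined formally earlier in the paper, following [16]. Purely as a concrete anchor for readers, the snippet below sketches one plausible formalisation: it flags features whose spread in the original data is unusually narrow and measures how strongly an attack perturbs them relative to that spread. The narrow-feature criterion (lowest quartile of standard deviations), the scaling, and the function name are all illustrative assumptions, not the benchmark's exact definition.

```python
import numpy as np

def sensitivity_score(X_orig, X_adv, X_train, narrow_quantile=0.25):
    """Illustrative sensitivity proxy: average perturbation magnitude on
    'narrow' features (std in the lowest quartile), scaled by each
    feature's std so that tightly distributed features weigh more.
    The benchmark's exact definition follows [16] and may differ."""
    stds = X_train.std(axis=0) + 1e-12
    narrow = stds <= np.quantile(stds, narrow_quantile)
    rel_perturb = np.abs(X_adv - X_orig) / stds
    return rel_perturb[:, narrow].mean()
```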
Figure 18: Sensitivity results of evaluated attack methods and four ML models on all three mixed datasets: (a) Adult, (b) Electricity, (c) Compas.

Sensitivity – Mixed Dataset. In the Adult dataset (Figure 18a), we observe striking differences across model architectures. While LR shows relatively low sensitivity scores (0.13) uniformly across ℓ∞-based attacks, more complex models exhibit much higher sensitivity values. TabTransformer, when targeted by FGSM, produces the highest sensitivity score (3.37) among all mixed datasets, indicating that this combination significantly perturbs narrow-distribution features. Most notably, C&W consistently demonstrates the lowest sensitivity scores (0.00-0.06) across all models, suggesting a superior ability to preserve the integrity of narrow-guard features. This aligns with its ℓ2-norm objective function, which naturally penalises large deviations in any individual feature.

The Electricity dataset (Figure 18b) presents a more
uniform sensitivity pattern across model architectures for the same attack method. The LR model consistently shows higher sensitivity scores (0.41) for ℓ∞-based attacks than the other models (0.13-0.19). This suggests that simpler model architectures may induce attackers to make more substantial modifications to narrowly distributed features in this dataset. The three ℓ∞-based attacks (FGSM, PGD, and BIM) produce identical sensitivity scores within each model architecture, indicating that these attacks, despite their algorithmic differences, alter narrow-distribution features similarly when applied to electricity data.

For the Compas dataset (Figure 18c), we observe moderate sensitivity scores overall, but with notable variations across model architectures. MLP shows significantly higher sensitivity scores (0.99) for FGSM and BIM than the transformer-based models (0.47-0.61), suggesting that the decision boundaries of MLPs may encourage more aggressive perturbations to narrow-distribution features. PGD consistently demonstrates lower sensitivity scores than the other ℓ∞-based attacks across all models, indicating a potentially more controlled approach to perturbing features with narrow distributions.

Sensitivity – Numerical Dataset. The Higgs dataset (Figure 19a) stands out for its remarkably low sensitivity scores across all attack methods and model architectures (0.00-0.03). This suggests either that the Higgs dataset lacks features with sufficiently narrow distributions or that its feature space allows effective adversarial examples without significantly altering narrow-distribution features. The uniformity of scores across different attack methods suggests that the dataset characteristics, rather than the attack algorithms, primarily drive the sensitivity outcomes in this case.

Figure 19: Sensitivity results of evaluated attack methods and four ML models on six (out of eight) numerical datasets: (a) Higgs, (b) house_16H, (c) jm1, (d) BreastCancer, (e) WineQuality-White, (f) WineQuality-Red.

Figure 20: (Cont.) Sensitivity results of evaluated attack methods and four ML models on the remaining two (out of eight) numerical datasets: (a) phoneme, (b) MiniBooNE.

In stark contrast, the jm1 dataset (Figure 19c) exhibits substantially higher sensitivity scores, particularly for LR models targeted by ℓ∞-based attacks (1.23). This dramatic difference from the other numerical datasets indicates that jm1 likely contains several features with narrow distributions that significantly influence model predictions. The sensitivity scores decrease markedly for more complex models, with TabTransformer and FTTransformer showing progressively lower scores (0.29-0.61) for the same attacks, suggesting that more sophisticated architectures may rely less on narrowly distributed features for their predictions.

The WineQuality datasets (White and Red, Figure 19e and 19f) demonstrate moderate sensitivity scores with interesting attack-specific patterns. For WineQuality-White, PGD produces notably higher sensitivity scores (0.31-0.54) than the other attacks, particularly with LR and FTTransformer models. This suggests that PGD's iterative approach may target narrow-distribution features more aggressively in this dataset. The WineQuality-Red dataset shows more uniform sensitivity scores across ℓ∞-based attacks but consistently lower values for ℓ2-based attacks.
The phoneme dataset (Figure 20a) exhibits some of the highest sensitivity scores among the numerical datasets, particularly for transformer models targeted by FGSM (0.80-0.90) and PGD (0.73). This suggests that phoneme's feature space contains influential narrow-distribution features on which transformer models heavily rely for predictions. Conversely, the MiniBooNE dataset (Figure 20b) shows consistently low sensitivity scores across
most attack-model combinations (0.01-0.06), with only PGD occasionally producing slightly higher values (0.14).

From a model architecture perspective, we observe that LR models often show either the highest or the lowest sensitivity scores depending on the dataset, suggesting that the interaction between model simplicity and dataset characteristics strongly influences how narrow-distribution features are perturbed. Transformer-based models show more variable sensitivity patterns across datasets, sometimes exhibiting high sensitivity (as in phoneme) and sometimes low (as in MiniBooNE).

Among attack methods, C&W consistently demonstrates the lowest sensitivity scores across nearly all datasets and models, confirming its tendency to preserve the characteristics of narrow-distribution features. PGD shows the most variable behaviour, sometimes producing the highest sensitivity scores (as in WineQuality-White and phoneme) and sometimes moderate values, suggesting that its perturbation strategy may be more adaptive to dataset-specific characteristics.

The lack of consistent patterns across datasets underscores that sensitivity to narrow-guard feature perturbation is highly contextual, depending on the specific combination of dataset characteristics, model architecture, and attack algorithm. This variability highlights the importance of dataset-specific evaluation when assessing the imperceptibility of adversarial attacks from a sensitivity perspective, rather than attempting to draw universal conclusions about attack or model behaviours.

4.4. RQ3: Whether and how can the evaluated algorithms achieve a balance between both imperceptibility and effectiveness?

Evaluating the relationship between effectiveness and imperceptibility in adversarial attacks is crucial for understanding how well an attack balances both aspects. Effectiveness is measured by the attack success rate (RQ1), and imperceptibility is assessed using metrics including sparsity, proximity, sensitivity, and deviation (RQ2). Rather than comparing effectiveness against each imperceptibility metric individually, our approach evaluates the overall imperceptibility, offering a more comprehensive understanding of its influence on attack success. This method supports the development of more robust adversarial attacks.

We propose a weighted harmonic mean to comprehensively assess imperceptibility, and refer to this overall metric as the Imperceptibility Score (IS). This assessment encompasses four metrics: proximity, deviation, sparsity, and sensitivity. This approach enables a nuanced and balanced evaluation, considering multiple facets of imperceptibility in the overall analysis. The construction of the Imperceptibility Score follows a systematic approach, consisting of the following steps (a code sketch follows the list):

1. Metric Definition: Define and establish the four metrics (proximity, deviation, sparsity, and sensitivity), each representing a distinct aspect of imperceptibility in the context of our evaluation. All four metrics are already defined and employed for evaluation in the prior sections.

2. Weight Assignment: Assign appropriate weights to each metric based on their relative importance in the imperceptibility assessment.
Considering the significance of each imperceptibility metric, we set equal weights:

proximity : sparsity : deviation : sensitivity = 0.25 : 0.25 : 0.25 : 0.25

3. Score Normalisation: Normalise the scores obtained for each metric to a common scale, such as 0-1, ensuring uniformity and comparability. This step is crucial to prevent biases arising from differences in the measurement scales of the individual metrics. Sparsity and deviation are already on a 0-1 scale via the sparsity rate and the outlier rate. For the other two metrics, an auxiliary normalisation function is required. Since the possible range of both the ℓ2 distance and the sensitivity score is [0, +∞), the common min-max normalisation (Eq. 15) is not suitable, as the maximum value is hard to determine:

x_{\mathrm{norm}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \qquad (15)

In practice, we therefore select x_norm = ln(x + 1) as the normalisation function to bring all four metrics onto the same scale.

4. Imperceptibility Score Calculation: Calculate the weighted harmonic mean of the normalised scores of the four metrics. The harmonic mean, being more sensitive to lower values, ensures that deficiencies in any individual metric have a noticeable impact on the overall evaluation:

\mathrm{IS} = \frac{n}{\sum_{i=1}^{n} w_i / x_i} \qquad (16)
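The four steps translate directly into a short routine. The sketch below is a minimal illustration and not the benchmark's released implementation: it applies the ln(x + 1) normalisation to all four metrics and then evaluates Eq. (16); the small epsilon guarding against division by zero is our addition.

```python
import numpy as np

def imperceptibility_score(proximity, sparsity, deviation, sensitivity,
                           weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted harmonic mean of the four imperceptibility metrics.
    Sparsity and deviation are already rates in [0, 1]; proximity
    (an l2 distance) and sensitivity live in [0, +inf), so all four
    metrics are passed through ln(x + 1) to share a common scale."""
    raw = np.array([proximity, sparsity, deviation, sensitivity], dtype=float)
    x = np.log1p(raw)                      # x_norm = ln(x + 1)
    w = np.array(weights, dtype=float)
    eps = 1e-12                            # guard against x == 0
    return len(x) / np.sum(w / (x + eps))  # IS = n / sum_i (w_i / x_i)
```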
Analysing the relationship between the attack success rate (ASR) and the imperceptibility score (IS) provides critical insight into how effectiveness and imperceptibility interact for adversarial attacks on tabular data. By visualising this relationship through a 2D density plot in Figure 21, we can discern patterns that illuminate the interplay between these two crucial factors. The plot is divided into four distinct sectors based on specific thresholds, enabling us to categorise different scenarios and gain a clearer understanding of their impact. These thresholds were determined using our Gaussian noise method, which selects the maximum ASR value (0.659) and the minimum IS value (0.181) from all adversarial examples generated by Gaussian noise.

Figure 21: The 2D density plot shows the attack success rate (ASR) and imperceptibility score (IS) across Gaussian noise and five different attack methods. The plot is divided into four sectors based on the maximum ASR value (0.659) and the minimum IS value (0.181) observed among all adversarial examples generated by Gaussian noise. Gaussian noise is an ineffective and perceptible method for generating adversarial examples for tabular data. FGSM, PGD, and BIM are categorised as effective but perceptible methods. The C&W attack has two high-density regions: one that is effective and imperceptible, and another that is ineffective but imperceptible. Most of the DeepFool attack's high-density regions fall into the effective and imperceptible sector.

Effective and Imperceptible (High ASR, Low IS). The most desirable outcome for adversarial attacks occurs when examples successfully fool models while remaining nearly indistinguishable from the original data. The density plot reveals that DeepFool consistently achieves this balance, with its highest-density region falling in this quadrant. DeepFool's iterative approach of finding minimal perturbations to cross decision boundaries clearly excels at preserving tabular data characteristics while maintaining high effectiveness. C&W also demonstrates strong performance in this quadrant for a portion of its examples, though it shows a bimodal distribution across both imperceptible regions. This suggests that C&W can achieve the ideal balance in many cases but may sometimes sacrifice effectiveness to maintain imperceptibility.

Effective but Perceptible (High ASR, High IS). This quadrant contains attacks that successfully mislead models but make noticeable modifications to the data. The density plots show that FGSM, PGD, and BIM consistently fall into this category, achieving high attack success rates at the cost of more significant data alterations. These ℓ∞-based attacks effectively fool models but often modify features in ways that could compromise data integrity or be detected in quality control processes.
Ineffective but Imperceptible (Low ASR, Low IS). Attacks in this quadrant make subtle changes that preserve data characteristics but fail to mislead models. C&W shows a significant density in this region, indicating that it sometimes generates examples that maintain excellent imperceptibility but cannot effectively fool the model. This highlights C&W's explicit optimisation for minimal perturbations, which can sometimes come at the expense of attack effectiveness.

Ineffective and Perceptible (Low ASR, High IS). The least desirable outcome occurs when attacks make noticeable changes yet fail to mislead the model. Gaussian noise predominantly falls in this category, confirming its poor performance as a baseline. Its high-density region centres on moderate ASR values with high imperceptibility scores, demonstrating why random noise is considered both ineffective and easily perceptible.

Overall Performance Comparison. The density plots provide clear evidence for ranking the overall performance of the different attack methods:

1. DeepFool emerges as the most balanced approach, consistently generating examples that are both highly effective and imperceptible. Its iterative linearisation of decision boundaries enables precise identification of the minimal perturbations needed to cross classification boundaries, resulting in subtle modifications that maintain data integrity while achieving high success rates.

2. C&W shows mixed results, with two distinct behaviour patterns: one group achieving the ideal balance and another maintaining imperceptibility at the cost of effectiveness.

3. The ℓ∞-based attacks (FGSM, PGD, and BIM) prioritise effectiveness over imperceptibility, making them suitable for scenarios where attack success matters more than preserving data characteristics.

4. Gaussian noise serves as an appropriate baseline, demonstrating poor performance in both dimensions, as expected.

This analysis provides valuable guidance for selecting appropriate attack methods based on the specific requirements of tabular data scenarios, highlighting the fundamental trade-off between effectiveness and imperceptibility in adversarial machine learning.

5. Discussion

5.1. Investigating the Inverse Relationship Between BIM Attack Budget and Success Rate

As presented in Section 4.2, our RQ1 evaluation results across both mixed and numerical datasets reveal an intriguing and counterintuitive phenomenon. While increasing the perturbation budget ϵ generally improves success rates for most attack methods, the BIM attack on the FTTransformer model shows a notable decline in success rates at higher perturbation budgets. This inverse relationship between attack budget and effectiveness contradicts conventional adversarial attack theory, in which larger perturbation budgets typically enable more successful attacks.

The plots in Figures 3 to 6 clearly demonstrate this unexpected pattern across multiple datasets, including Electricity, Compas, house_16H, BreastCancer, and MiniBooNE. In these cases, BIM attack success rates initially increase with ϵ but then decline significantly at higher values, sometimes dropping dramatically. For example, on the BreastCancer dataset, the success rate drops from approximately 35% to nearly 0% at the highest ϵ value, while on MiniBooNE, it plummets from 100% to about 40%.
Two primary factors may explain this counterintuitive behaviour:

1. Gradient Saturation Effects: In BIM's iterative approach, the step size (α) and the number of iterations (T) play critical roles. With the default parameters (α = 0.2, T = 10), the relatively large step size may cause
overshooting at higher ϵ values. This occurs because BIM computes gradients with respect to the input and takes steps in that direction. As ϵ increases, these steps can become too large, causing the attack to miss optimal adversarial regions and produce less effective perturbations.

2. Decision Boundary Characteristics: FTTransformer models may have complex decision boundaries with unusual topological properties. At higher ϵ values, adversarial examples might cross these boundaries multiple times, potentially returning to regions where the model correctly classifies inputs. This threshold effect suggests that beyond certain perturbation magnitudes, the adversarial examples become perceptually more similar to their correct classes.

Figure 22: BIM attacks on FTTransformer stop dropping in attack success rate after adjusting the step size (α) hyperparameter. The orange lines, representing the adjusted BIM implementation, consistently maintain high attack success rates across all ϵ values, eliminating the dramatic drops observed with the default parameters.

Our follow-up experiment demonstrates that adjusting BIM's hyperparameters can mitigate this issue. By reducing the step size (α) from 0.2 to 0.05 and increasing the number of iterations (T) from 10 to 20 (Figure 22), we observe that the modified BIM attack maintains high success rates even at larger ϵ values across all datasets. This confirms that the original decline was primarily due to optimisation challenges rather than a fundamental limitation of the attack method.
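A minimal sketch of the BIM loop makes the role of the two hyperparameters concrete. This is an illustrative PyTorch implementation rather than the attack-library code used in our experiments, and the function name and signature are ours; domain-specific clipping of features to valid ranges is omitted for brevity.

```python
import torch

def bim_attack(model, x, y, eps, alpha=0.05, steps=20):
    """Basic Iterative Method under an l_inf budget eps. With the default
    parameters discussed above (alpha=0.2, steps=10), large eps budgets
    can overshoot; alpha=0.05 with steps=20 removed the ASR drop in
    Figure 22. Sketch only."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
    return x_adv.detach()
```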
This finding has important implications for adversarial attack research on tabular data:

• FTTransformer models possess unique adversarial robustness characteristics that differ from other model architectures.

• Attack hyperparameters require careful tuning based on both the model architecture and dataset characteristics.

• When evaluating adversarial robustness, researchers should consider a range of attack configurations beyond the default parameters to ensure a comprehensive assessment.

This investigation highlights the complex interplay between attack algorithms, model architectures, and dataset characteristics in the tabular domain. It also demonstrates the importance of parameter optimisation when deploying adversarial attacks, particularly for transformer-based models, which may have more complex decision boundary topologies than traditional neural networks.

5.2. Exploring Design Strategies for Effective and Imperceptible Adversarial Attacks on Tabular Data

In light of the results from analysing the relationship between attack success rate (ASR) and imperceptibility score (IS), achieving an optimal balance between effectiveness and imperceptibility is crucial when designing adversarial attack algorithms for tabular data. One notable observation is that ℓ∞-based attacks tend to generate highly effective adversarial examples, whereas ℓ2-based attacks are more adept at producing imperceptible examples. The key challenge lies in finding the equilibrium between these two aspects. To design effective and imperceptible adversarial attack algorithms for tabular data, several strategies can be explored:

• Optimisation Techniques: Employing advanced optimisation techniques can enhance the efficiency of adversarial attack algorithms. Techniques such as evolutionary algorithms, genetic algorithms, or gradient-based optimisation methods can be tailored to optimise both ASR and IS simultaneously, thereby facilitating the creation of more effective and imperceptible adversarial examples.

• Feature Engineering: Leveraging domain-specific knowledge and feature engineering techniques can enhance the robustness and imperceptibility of adversarial attacks. By identifying and manipulating the key features within the tabular data that are most susceptible
to manipulation, attackers can craft adversarial examples that achieve their objectives while minimising perceptible changes to the data.

5.3. Evaluating the Suitability of One-Hot Encoding for Adversarial Attacks on Tabular Data

Adversarial attacks in machine learning have predominantly focused on image data, which are continuous and typically measured in the [0, 255] range. These attacks often involve adding small perturbations to the original samples, and the perturbations are evaluated using distance metrics such as the ℓp-norm. However, when it comes to tabular data, the challenge becomes more complex due to the presence of both numerical and categorical features. Categorical data can further be divided into nominal data, which are used for naming variables, and ordinal data, which possess an intrinsic order. Encoding these categorical features into numerical values is essential for applying adversarial attack algorithms effectively.

Several existing studies, summarised in Table 5, have explored different strategies for handling categorical features in adversarial attacks on tabular data. Ballet et al. [23] proposed dropping all categorical features and using the ℓp-norm and a weighted ℓp-norm as distance metrics. Mathov et al. [25] suggested using label encoding for categorical data. Chernikova and Oprea [26] and Cartella et al. [33] both used one-hot encoding and applied the ℓ2-norm as their distance metric. On the other hand, Kireev et al. [27] opted for discretising continuous features and used a cost function as the distance measure, while Zhou et al. [37] similarly discretised continuous features, measuring perturbations with the ℓ1-norm.

Table 5: The encoding methods employed in recent papers on adversarial attacks targeting tabular data.

Paper | Year | Encoding method | Distance metric
Ballet et al. [23] | 2019 | Drop all categorical features | ℓp-norm & weighted ℓp-norm
Mathov et al. [25] | 2021 | Label encoding | ℓ2-norm
Chernikova and Oprea [26] | 2022 | One-hot encoding | ℓ2-norm
Cartella et al. [33] | 2021 | One-hot encoding | ℓ2-norm
Kireev et al. [27] | 2022 | Discretise continuous features | Cost function
Zhou et al. [37] | 2022 | Discretise continuous features | ℓ1-norm

In our work, we also adopted one-hot encoding for handling categorical features. However, our evaluation revealed that one-hot encoding significantly impacts the sparsity of adversarial attacks on tabular data. One-hot encoding transforms categorical variables into a high-dimensional binary vector space, which increases the dimensionality of the dataset and consequently affects the efficiency and effectiveness of adversarial attacks. The high-dimensional space created by one-hot encoding may dilute the perturbations, making it harder to generate effective adversarial examples while maintaining the integrity of the data.

This observation underscores the importance of choosing an appropriate encoding method for categorical features in adversarial attacks on tabular data. While one-hot encoding preserves the categorical nature of the data, its impact on sparsity and dimensionality needs careful consideration. Alternative encoding methods, such as label encoding for ordinal data or more advanced embedding-based techniques, might offer better trade-offs between preserving the data structure and maintaining attack effectiveness. Future research should evaluate these methods to develop robust adversarial attack strategies for tabular data that balance the trade-offs between sparsity, dimensionality, and attack performance.
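A toy example illustrates the dimensionality effect. The column names below are hypothetical, Adult-like stand-ins, not the benchmark's actual preprocessing:

```python
import pandas as pd

# One-hot encoding inflates dimensionality, which distorts sparsity
# measurements: flipping a single categorical value touches two
# one-hot columns at once.
df = pd.DataFrame({
    "age": [39, 50],
    "workclass": ["Private", "Self-emp"],
    "education": ["Bachelors", "HS-grad"],
})
onehot = pd.get_dummies(df, columns=["workclass", "education"])
print(df.shape, "->", onehot.shape)   # (2, 3) -> (2, 5) here; real
                                      # datasets expand far more
```

Note that changing a single category, e.g. workclass from Private to Self-emp, flips two binary columns simultaneously, so an ℓ0-style sparsity count registers two modified features for one semantic edit.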
Moreover, exploring alternative distance metrics presents a promising direction for future research. Traditional metrics such as the ℓp-norm may not be well suited to the mixed data types often found in tabular datasets. Metrics such as Gower's distance [38], which can handle mixed types of data (continuous, ordinal, and categorical), could provide a more accurate measure of similarity for tabular data. Additionally, other categorical feature similarity measures, such as those proposed by Cost and Salzberg [39] and Le and Ho [40], offer potential improvements by considering the unique characteristics of categorical data. By integrating such distance metrics into the design of adversarial attack algorithms, researchers can develop more effective and nuanced methods that are better tailored to the complexities of tabular data.
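For concreteness, a minimal unweighted variant of Gower's distance [38] can be sketched as follows; the function name and interface are ours, and production use should rely on a vetted implementation:

```python
import numpy as np

def gower_distance(a, b, is_categorical, ranges):
    """Gower's distance for a pair of mixed-type records: categorical
    features contribute 0/1 mismatches, numerical ones a range-normalised
    absolute difference; the result is the average. `ranges` holds
    (max - min) per numerical feature and is ignored at categorical
    positions. Minimal, unweighted sketch."""
    d = np.empty(len(a))
    for i, cat in enumerate(is_categorical):
        if cat:
            d[i] = 0.0 if a[i] == b[i] else 1.0
        else:
            d[i] = abs(a[i] - b[i]) / ranges[i]
    return d.mean()

# Hypothetical usage with one numeric and one categorical feature:
# gower_distance([39, "Private"], [50, "Self-emp"],
#                is_categorical=[False, True], ranges=[73, None])
```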
6. Conclusion

In this paper, we conducted a comprehensive benchmark analysis of adversarial attacks on tabular data, focusing on both their effectiveness and imperceptibility. Using a diverse set of 11 datasets, encompassing both mixed and numerical data types, we evaluated the performance of five different adversarial attacks across four predictive models. Our findings reveal significant variations in attack effectiveness depending on the dataset and model combination. Furthermore, the study highlights the challenge of maintaining attack imperceptibility, particularly in the context of tabular data, where subtle modifications can become perceptually noticeable.

The results of our benchmark provide valuable insights into the strengths and limitations of existing adversarial attack methods when applied to tabular data. By analysing the trade-offs between attack success rates and their imperceptibility, we offer an understanding of how different attacks perform under varying conditions. This study lays the groundwork for future research aimed at developing more robust adversarial defences that can effectively counteract these attacks while preserving the integrity of tabular datasets.

This research assumes that all features contribute equally to the predictive models, similar to the notion that each pixel holds equal importance in images. However, real-world tabular datasets often exhibit complex inter-dependencies among features. This observation points to the need for future work to explore non-uniform adversarial attacks [41, 42]. Addressing these challenges will contribute to a more comprehensive understanding of the robustness and generalisation capabilities of predictive models in practical applications.

Code Availability

The implementation code, including data processing scripts and experimental pipelines, is openly available at https://github.com/ZhipengHe/TabAttackBench/

CRediT authorship contribution statement

Zhipeng He: Conceptualisation, Methodology, Software, Investigation, Writing – Original Draft, Visualisation. Chun Ouyang: Conceptualisation, Methodology, Investigation, Writing – Original Draft. Lijie Wen: Methodology, Investigation. Cong Liu: Supervision. Catarina Moreira: Methodology, Investigation, Supervision.

Appendix A. Selected Attack Budgets (ϵ) by ASR

Table A.6: Best attack budget (ϵ) settings for four models on different datasets from the evaluation.

Datasets | Model | Gaussian | FGSM | PGD | BIM | C&W | DeepFool
Adult | LR | 1 | 0.3 | 0.3 | 0.3 | 0.5 | 1
Adult | MLP | 0.5 | 1 | 1 | 1 | 0.01 | 1
Adult | TabTrans | 0.5 | 1 | 0.5 | 0.5 | 0.01 | 1
Adult | FTTrans | 1 | 0.3 | 0.5 | 0.5 | 0.5 | 1
Electricity | LR | 1 | 0.3 | 0.3 | 0.3 | 1 | 0.5
Electricity | MLP | 1 | 0.1 | 0.1 | 0.1 | 0.3 | 0.3
Electricity | TabTrans | 1 | 0.1 | 0.1 | 0.1 | 0.3 | 0.3
Electricity | FTTrans | 1 | 0.1 | 0.1 | 0.1 | 0.3 | 0.3
Compas | LR | 0.5 | 0.3 | 0.3 | 0.3 | 0.1 | 1
Compas | MLP | 0.3 | 0.5 | 0.5 | 0.5 | 1 | 1
Compas | TabTrans | 1 | 0.3 | 0.3 | 0.3 | 0.1 | 0.5
Compas | FTTrans | 0.07 | 0.3 | 1 | 0.3 | 0.01 | 0.5
Higgs | LR | 1 | 0.07 | 0.07 | 0.07 | 0.3 | 0.1
Higgs | MLP | 1 | 0.07 | 0.07 | 0.07 | 0.3 | 0.1
Higgs | TabTrans | 1 | 0.07 | 0.07 | 0.07 | 0.3 | 0.1
Higgs | FTTrans | 1 | 0.07 | 0.07 | 0.07 | 0.3 | 0.1
house_16H | LR | 0.01 | 0.3 | 0.3 | 0.3 | 1 | 1
house_16H | MLP | 1 | 0.1 | 0.1 | 0.1 | 0.5 | 0.3
house_16H | TabTrans | 1 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5
house_16H | FTTrans | 1 | 0.1 | 0.1 | 0.1 | 1 | 0.5
jm1 | LR | 0.01 | 0.5 | 0.5 | 0.5 | 0.01 | 1
jm1 | MLP | 1 | 0.1 | 0.1 | 0.1 | 0.5 | 0.5
jm1 | TabTrans | 1 | 0.3 | 0.3 | 0.3 | 0.3 | 0.5
jm1 | FTTrans | 1 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5
BreastCancer | LR | 1 | 0.3 | 0.3 | 0.3 | 1 | 1
BreastCancer | MLP | 1 | 0.3 | 0.3 | 0.3 | 1 | 1
BreastCancer | TabTrans | 1 | 0.3 | 0.5 | 0.5 | 1 | 1
BreastCancer | FTTrans | 1 | 0.3 | 1 | 0.1 | 1 | 0.5
WineQuality-White | LR | 1 | 0.3 | 0.5 | 0.3 | 1 | 1
WineQuality-White | MLP | 1 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5
WineQuality-White | TabTrans | 1 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5
WineQuality-White | FTTrans | 1 | 0.3 | 0.5 | 0.3 | 1 | 1
WineQuality-Red | LR | 1 | 0.3 | 0.3 | 0.3 | 1 | 0.5
WineQuality-Red | MLP | 1 | 0.3 | 0.3 | 0.3 | 0.5 | 0.5
WineQuality-Red | TabTrans | 1 | 0.3 | 0.5 | 0.3 | 0.5 | 0.5
WineQuality-Red | FTTrans | 1 | 0.3 | 0.3 | 0.3 | 1 | 0.5
phoneme | LR | 1 | 0.5 | 1 | 0.5 | 0.01 | 1
phoneme | MLP | 1 | 0.3 | 0.3 | 0.3 | 1 | 1
phoneme | TabTrans | 1 | 1 | 1 | 0.3 | 1 | 1
phoneme | FTTrans | 1 | 1 | 0.5 | 0.3 | 0.5 | 1
MiniBooNE | LR | 1 | 0.1 | 0.3 | 0.1 | 0.1 | 0.3
MiniBooNE | MLP | 1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.3
MiniBooNE | TabTrans | 1 | 0.07 | 0.07 | 0.07 | 0.1 | 0.3
MiniBooNE | FTTrans | 1 | 0.1 | 0.3 | 0.1 | 0.1 | 0.5

References

[1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, R. Fergus, Intriguing properties of neural networks, in: 2nd International Conference on Learning Representations, ICLR 2014, 2014.
[2] N. Akhtar, A. Mian, Threat of adversarial attacks on deep learning in computer vision: A survey, IEEE Access 6 (2018) 14410–14430.
[3] W. E. Zhang, Q. Z. Sheng, A. Alhazmi, C. Li, Adversarial attacks on deep-learning models in natural language processing: A survey, ACM Transactions on Intelligent Systems and Technology (TIST) 11 (2020) 1–41.
[4] N. Carlini, D. Wagner, Audio adversarial examples: Targeted attacks on speech-to-text, in: 2018 IEEE Security and Privacy Workshops (SPW), IEEE, 2018, pp. 1–7.
[5] F. Croce, M. Andriushchenko, V. Sehwag, E. Debenedetti, N. Flammarion, M. Chiang, P. Mittal, M. Hein, RobustBench: A standardized adversarial robustness benchmark, arXiv preprint arXiv:2010.09670 (2020).
[6] S. Eger, Y. Benz, From hero to zéroe: A benchmark of low-level adversarial attacks, in: Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 2020, pp. 786–803.
[7] Z. Jin, J. Zhang, Z. Zhu, H. Chen, Short: Benchmarking transferable adversarial attacks, in: Network and Distributed System Security (NDSS) Symposium 2024, 2024.
[8] Y. Dong, Q.-A. Fu, X. Yang, T. Pang, H. Su, Z. Xiao, J. Zhu, Benchmarking adversarial robustness on image classification, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 321–331.
[9] M. Zheng, X. Yan, Z. Zhu, H. Chen, B. Wu, BlackboxBench: A comprehensive benchmark of black-box adversarial attacks, arXiv preprint arXiv:2312.16979 (2023).
[10] N. Hingun, C. Sitawarin, J. Li, D. Wagner, REAP: A large-scale realistic adversarial patch benchmark, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 4640–4651.
[11] A. E. Cinà, J. Rony, M. Pintor, L. Demetrio, A. Demontis, B. Biggio, I. B. Ayed, F. Roli, AttackBench: Evaluating gradient-based attacks for adversarial examples, arXiv preprint arXiv:2404.19460 (2024).
[12] Q. Zheng, X. Zou, Y. Dong, Y. Cen, D. Yin, J. Xu, Y. Yang, J. Tang, Graph robustness benchmark: Benchmarking the adversarial robustness of graph machine learning, arXiv preprint arXiv:2111.04314 (2021).
[13] L. Li, J. Lei, Z. Gan, J. Liu, Adversarial VQA: A new benchmark for evaluating the robustness of VQA models, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 2042–2051.
[14] S. A. Siddiqui, A. Dengel, S. Ahmed, Benchmarking adversarial attacks and defenses for time-series data, in: International Conference on Neural Information Processing, Springer, 2020, pp. 544–554.
[15] I. J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples, in: 3rd International Conference on Learning Representations, ICLR 2015, 2015.
[16] Z. He, C. Ouyang, L. Alzubaidi, A. Barros, C. Moreira, Investigating imperceptibility of adversarial attacks on tabular data: An empirical analysis, Intelligent Systems with Applications 25 (2025) 200461.
[17] N. Papernot, P. D. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, A. Swami, The limitations of deep learning in adversarial settings, in: IEEE European Symposium on Security and Privacy, EuroS&P 2016, IEEE, 2016, pp. 372–387.
[18] F. Assion, P. Schlicht, F. Greßner, W. Günther, F. Hüger, N. M. Schmidt, U. Rasheed, The attack generator: A systematic approach towards constructing adversarial attacks, in: IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2019, Long Beach, CA, USA, June 16-20, 2019, Computer Vision Foundation / IEEE, 2019, pp. 1370–1379.
[19] A. Kurakin, I. J. Goodfellow, S. Bengio, Adversarial examples in the physical world, in: 5th International Conference on Learning Representations, ICLR 2017, 2017.
[20] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu, Towards deep learning models resistant to adversarial attacks, arXiv preprint arXiv:1706.06083 (2017).
[21] N. Carlini, D. A. Wagner, Towards evaluating the robustness of neural networks, in: 2017 IEEE Symposium on Security and Privacy, SP 2017, IEEE Computer Society, 2017, pp. 39–57.
[22] S. Moosavi-Dezfooli, A. Fawzi, P. Frossard, DeepFool: A simple and accurate method to fool deep neural networks, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, IEEE Computer Society, 2016, pp. 2574–2582.
[23] V. Ballet, X. Renard, J. Aigrain, T. Laugel, P. Frossard, M. Detyniecki, Imperceptible adversarial attacks on tabular data, arXiv preprint arXiv:1911.03274 (2019).
[24] T. Simonetto, S. Dyrmishi, S. Ghamizi, M. Cordy, Y. L. Traon, A unified framework for adversarial attack and defense in constrained feature space, arXiv preprint arXiv:2112.01156 (2021).
[25] Y. Mathov, E. Levy, Z. Katzir, A. Shabtai, Y. Elovici, Not all datasets are born equal: On heterogeneous tabular data and adversarial examples, Knowledge-Based Systems 242 (2022) 108377.
[26] A. Chernikova, A. Oprea, FENCE: Feasible evasion attacks on neural networks in constrained environments, ACM Transactions on Privacy and Security 25 (2022) 34:1–34:34.
[27] K. Kireev, B. Kulynych, C. Troncoso, Adversarial robustness for tabular data through cost and utility awareness, arXiv preprint arXiv:2208.13058 (2022).
[28] J. Zhou, N. Zaidi, Y. Zhang, P. Montague, J. Kim, G. Li, Leveraging generative models for combating adversarial attacks on tabular datasets, in: Pacific-Asia Conference on Knowledge Discovery and Data Mining, Springer, 2023, pp. 147–158.
[29] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, C.-J. Hsieh, ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models, in: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, pp. 15–26.
[30] W. Brendel, J. Rauber, M. Bethge, Decision-based adversarial attacks: Reliable attacks against black-box machine learning models, arXiv preprint arXiv:1712.04248 (2017).
[31] J. Chen, M. I. Jordan, M. J. Wainwright, HopSkipJumpAttack: A query-efficient decision-based attack, in: 2020 IEEE Symposium on Security and Privacy (SP), IEEE, 2020, pp. 1277–1294.
[32] G. Gressel, N. Hegde, A. Sreekumar, R. Radhakrishnan, K. Harikumar, K. Achuthan, et al., Feature importance guided attack: A model agnostic adversarial attack, arXiv preprint arXiv:2106.14815 (2021).
[33] F. Cartella, O. Anunciação, Y. Funabiki, D. Yamaguchi, T. Akishita, O. Elshocht, Adversarial attacks for tabular data: Application to fraud detection and imbalanced data, in: Workshop on Artificial Intelligence Safety 2021, 2021.
[34] J. Thiyagalingam, M. Shankar, G. Fox, T. Hey, Scientific machine learning benchmarks, Nature Reviews Physics 4 (2022) 413–420.
[35] X. Huang, A. Khetan, M. Cvitkovic, Z. Karnin, TabTransformer: Tabular data modeling using contextual embeddings, arXiv preprint arXiv:2012.06678 (2020).
[36] Y. Gorishniy, I. Rubachev, V. Khrulkov, A. Babenko, Revisiting deep learning models for tabular data, Advances in Neural Information Processing Systems 34 (2021) 18932–18943.
[37] J. Zhou, N. A. Zaidi, Y. Zhang, G. Li, Discretization inspired defence algorithm against adversarial attacks on tabular data, in: Advances in Knowledge Discovery and Data Mining - 26th Pacific-Asia Conference, PAKDD 2022, volume 13281 of Lecture Notes in Computer Science, 2022, pp. 367–379.
[38] J. C. Gower, A general coefficient of similarity and some of its properties, Biometrics (1971) 857–871.
[39] S. Cost, S. Salzberg, A weighted nearest neighbor algorithm for learning with symbolic features, Machine Learning 10 (1993) 57–78.
[40] S. Q. Le, T. B. Ho, An association-based dissimilarity measure for categorical data, Pattern Recognition Letters 26 (2005) 2549–2557.
[41] E. Erdemir, J. Bickford, L. Melis, S. Aydore, Adversarial robustness with non-uniform perturbations, Advances in Neural Information Processing Systems 34 (2021) 19147–19159.
[42] J. Nandy, J. Chauhan, R. Saket, A. Raghuveer, Non-uniform adversarial perturbations for discrete tabular datasets, in: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 2023, pp. 1887–1896.
arXiv:2505.21032v1 [cs.CV] 27 May 2025

FeatInv: Spatially resolved mapping from feature space to input space using conditional diffusion models

Nils Neukirch, Division AI4Health, Carl von Ossietzky Universität Oldenburg ([email protected])
Vielhaben, Explainable Artificial Intelligence Group, Fraunhofer Heinrich-Hertz-Institute ([email protected])
Nils Strodthoff, Division AI4Health, Carl von Ossietzky Universität Oldenburg ([email protected])

Abstract

Internal representations are crucial for understanding deep neural networks, such as their properties and reasoning patterns, but remain difficult to interpret. While mapping from feature space to input space aids in interpreting the former, existing approaches often rely on crude approximations. We propose using a conditional diffusion model – a pretrained high-fidelity diffusion model conditioned on spatially resolved feature maps – to learn such a mapping in a probabilistic manner. We demonstrate the feasibility of this approach across various pretrained image classifiers from CNNs to ViTs, showing excellent reconstruction capabilities. Through qualitative comparisons and robustness analysis, we validate our method and showcase possible applications, such as the visualization of concept steering in input space or investigations of the composite nature of the feature space. This approach has broad potential for improving feature space understanding in computer vision models.

Figure 1: FeatInv learns a probabilistic mapping from feature space to input space and thereby provides a visualization of how a sample is perceived by the respective model. The goal is to identify input samples within the set of natural images whose feature representations align most closely with the original feature representation of a given model. In this figure, we visualize reconstructed samples obtained by conditioning on the feature maps of the penultimate layer from ResNet50, ConvNeXt and SwinV2 models. (Panels: Original, ResNet50, ConvNeXt, SwinV2.)

Preprint. Under review.

1 Introduction

The feature space is vital for understanding neural network decision processes, as it offers insights into the internal representations formed by these models as they process input data. While it serves as the foundation for many modern explainability approaches [21, 3], its importance extends beyond interpretability. The feature space provides a rich resource for investigating fundamental properties of deep neural networks, including their robustness against perturbations, invariance characteristics, and symmetry properties [4]. By analyzing the geometry and topology of these learned representations, researchers can gain insights into model generalization capabilities, failure modes, and the emergence of higher-order patterns in the data. This perspective enables advancements in the theoretical understanding of neural networks while informing practical improvements in architecture design and training methodologies.

An important challenge in examining the feature space is establishing a connection back to the input domain, especially for classification models that map to labels rather than to the same domain as the input. One aspect of this challenge involves identifying which part of the input a particular region or unit in feature space is sensitive to. GradCAM [24] pioneered this by linearly upsampling a region of interest in feature space to the input size. However, linear upsampling imposes a rather strong implicit assumption.
As an alternative, one might consider the entire receptive field of a feature map location, yet in deep architectures these fields tend to be broad and less informative. The more intricate second
aspect of this challenge is to derive a mapping from the entirety of the feature space representation back to the input domain – beyond mere localization. Recent works proposed to leverage conditional generative models to learn such a mapping by conditioning them on feature maps [4, 6, 23]. However, these approaches either build on pooled feature maps (discarding fine-grained spatial details of the feature map), only provide deterministic mappings (overlooking the inherent uncertainty of this ill-posed problem), or do not utilize state-of-the-art generative models. To the best of our knowledge, there is no probabilistic model that provides high-fidelity input samples when conditioned on a spatially resolved feature map – thereby integrating both aspects of the challenge described above. We aim to close this gap with this submission. More specifically, in this work we put forward the following contributions:

1. We demonstrate the feasibility of learning high-fidelity mappings from feature space to input space using a conditional diffusion model of the ControlNet flavor, as exemplified in Fig. 1. We investigate this for different computer vision models, ranging from CNNs to ViTs.

2. We provide quantitative evidence that generated samples align with the feature maps of the original samples and that the samples represent high-fidelity natural images (see Tab. 1), and we carry out a qualitative model comparison (see Fig. 3) as well as a robustness analysis (see Tab. 2).

3. We provide specific use-cases for the application of the proposed methodology: visualizing concept steering in input space (see Fig. 4) and providing insights into the composite nature of the feature space (see Fig. 5).

2 Methods

Approach In this work, we propose a method called FeatInv to approximate an inverse mapping from a model's feature space to input space. Our method conditions a pretrained stable diffusion model on a spatially resolved feature map extracted from a pretrained CNN/ViT model of our choice. As described in detail in the next paragraph, the feature maps are provided as conditional information, along with an unspecific text prompt ("a high-quality, detailed, and professional image"), to a conditional diffusion model of the ControlNet [29] flavor. Importantly, rather than achieving a precise reconstruction of the original sample in input space, our goal is to infer high-fidelity, synthetic images whose feature representations align with those of the original image when passed through a pretrained CNN/ViT model.

Architecture and training procedure We use a ControlNet [29] architecture, building on a pretrained diffusion model, in our case a MiniSD [20] model operating at an input resolution of 256×256. The ControlNet is a popular approach to condition a pretrained diffusion model on dense inputs such as segmentation maps or depth maps. It leverages a pretrained (text-conditional) diffusion model, whose weights are kept frozen.

Figure 2: Schematic overview of the FeatInv approach. Left: Given a spatially resolved feature map c_f of some pretrained model (CNN/ViT), we aim to infer an input
x′ within the set of natural images whose feature representation aligns as closely as possible with c_f, i.e., to learn a probabilistic mapping from feature space to input space. Previous work considered spatially pooled feature maps, whereas this work conditions on spatially resolved feature maps. Middle: We leverage a pretrained diffusion model, which is conditioned on c_f by means of a ControlNet architecture, which parametrizes an additive modification on top of the frozen diffusion model. Right top: The ControlNet adds trainable copies of blocks in the stable diffusion model, which are conditioned on the conditional input and added to the output of the original module, which is kept frozen. Right bottom: The feature map c_f is processed through bilinear upsampling and a shallow convolutional encoder to serve as conditional input for the ControlNet.

The trainable part of the ControlNet model mimics the internal structure of the pretrained diffusion model, with additional layers introduced to incorporate conditioning inputs. These conditional inputs are processed by a dedicated encoder and inserted into the corresponding computational blocks, where their outputs are added to those of the original diffusion model. Convolutional layers that are initialized to zero ensure that the optimization of the ControlNet model starts from the pretrained diffusion model.

Conditional input encoder An important design choice is the conditional input encoder, which maps the feature map (with shape H_f × W_f × C_f, where H_f, W_f, C_f correspond to the height, width and channels of the feature map, respectively) to the diffusion model's internal representation space (with shape H_d × W_d × C_d). As a definite example, for 224×224 input resolution and the output of ResNet50's final convolutional block with H_f = W_f = 7, C_f = 2048, we learn a mapping to the diffusion model's internal representation space with H_d = W_d = 32 and C_d = 320. To this end, we first use bilinear upsampling to reach the target resolution. Then, we allow for a shallow CNN to learn a suitable mapping from the model's representation space to the diffusion model's representation space.

Pooled vs. unpooled To demonstrate superiority over prior work [4], we also consider the case of pooled feature representations obtained by average-pooling spatial tokens/feature maps. In order to process them using the same pipeline as for conditioning on spatially resolved feature maps, we copy the C_d-dimensional input vector H_d and W_d times to reach an input tensor with shape H_d × W_d × C_d as before.
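A minimal PyTorch sketch of such a conditional input encoder is given below. The exact depth and kernel sizes of the shallow CNN are not specified in the text, so the conv/SiLU stack mirroring Fig. 2 is an assumption, as is the class name.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalEncoder(nn.Module):
    """Maps a backbone feature map (channels-first in PyTorch) into the
    diffusion model's representation space: bilinear upsampling to the
    target spatial resolution, then a shallow conv stack (conv-SiLU-
    conv-SiLU, mirroring Fig. 2). Layer sizes are illustrative."""
    def __init__(self, c_feat=2048, c_diff=320, target_hw=32):
        super().__init__()
        self.target_hw = target_hw
        self.proj = nn.Sequential(
            nn.Conv2d(c_feat, c_diff, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(c_diff, c_diff, kernel_size=3, padding=1),
            nn.SiLU(),
        )

    def forward(self, feat):                      # feat: (B, C_f, H_f, W_f)
        x = F.interpolate(feat, size=(self.target_hw, self.target_hw),
                          mode="bilinear", align_corners=False)
        return self.proj(x)                       # (B, C_d, H_d, W_d)

# ResNet50 example from the text: (B, 2048, 7, 7) -> (B, 320, 32, 32)
enc = ConditionalEncoder()
out = enc(torch.randn(1, 2048, 7, 7))
assert out.shape == (1, 320, 32, 32)
```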
Training The ControlNet is trained using the same noise prediction objective as the original diffusion model [12]. Control signals are injected at multiple layers throughout the network, rather than being restricted to the middle layers, allowing them to influence the denoising process at various stages. Training was conducted on the ImageNet training set with a batch size of 8 and a learning rate of 1e-5, using an AdamW optimizer with the stable diffusion model locked. The ControlNet was trained on ImageNet for three epochs over approximately 45 to 60 hours (depending on the backbone) of compute time on two NVIDIA L40 GPUs. During the course of this project, about five times more models were trained until the described setup was reached.

Full pipeline We work with the original input resolution of the respective pretrained models, which varies between 224×224 and 384×384 for the considered models; see the supplementary material for a detailed breakdown. Even though the approach allows conditioning on any feature map, we restrict ourselves to the last spatially resolved feature map, i.e., the one directly before the pooling layer, and learn mappings to MiniSD's internal feature space. The MiniSD model always returns an image with resolution 256×256, which we upsample/downsample to the model's expected input resolution via bilinear upsampling/downsampling. The full generation pipeline is visualized in Fig. 2.

3 Related Work

Conditional diffusion models Achieving spatially controllable image generation while leveraging a pretrained diffusion model has been a very active area of research recently; see [30] for a recent review. Applications include the conditional generation of images from depth maps, normal maps or canny edge maps. Popular approaches in this direction include ControlNet [29] and GLIGEN [14]. The mapping from feature maps as conditional input is structurally similar to the mentioned cases of spatially controllable generation. However, there is a key distinction. In the previously mentioned cases, the conditional information typically matches the resolution of the input image. This often necessitates downsampling to reach the diffusion model's internal representation space. In contrast, commonly used classification models (including CNNs and vision transformers) leverage feature maps with a reduced spatial resolution. Consequently, the spatial resolution of the conditional information is typically lower than that of the diffusion model's internal representation space. This difference necessitates an upsampling operation before conditioning on feature maps.

Feature visualization The idea of revealing structures in feature space to understand what a neural network has learned is an old one. Approaches range from identifying input structures or samples that maximize the activation of certain feature neurons [7, 19] to approximate inversion of the mapping from input to feature space [28]. Our approach clearly stands in the tradition of the latter. Previous work has attempted to learn a deterministic mapping that "inverts" AlexNet feature maps [6]. This approach was recently extended to invert vision transformer representations [22]. In contrast, FeatInv learns a probabilistic mapping using state-of-the-art diffusion models and investigates state-of-the-art model architectures. Other approaches tackle the problem using invertible neural networks to connect VAE latent representations to input space [23] and/or disentangle these representations using concept supervision [8]. In contrast, FeatInv does not rely on a particular encoder/decoder structure but can use any pretrained neural network as encoder. The closest prior work to our approach is [4], which also uses a diffusion model to learn a mapping from feature space to input space. However, it uses pooled representations as input, i.e., it neglects the spatial resolution of the feature map. We argue that pooled representations are too coarse for many applications, as they disregard the fine-grained spatial structure of the feature space.

Representation surgery Finally, related feature inversion approaches have also been explored beyond computer vision, for example in natural language processing [18].
Here, the ability to invert latent representations is seen as an essential component of representation surgery approaches [1]. FeatInv enables similar approaches for computer vision models.
4 Results

We investigate three models, ResNet50 [11] (original torchvision weights), ConvNeXt [16] and SwinV2 [15] (timm model weights: convnext_base.fb_in22k_ft_in1k and swinv2_base_window12to24_192to384_22kft1k), all of which have been pretrained/finetuned on ImageNet1k. ConvNeXt and SwinV2 represent modern convolution-based and vision-transformer-based architectures, identified as strong backbones in [10]. We include ResNet50 due to its widespread adoption. For each model, we train a conditional diffusion model conditioned on the representations of the last hidden layer before the final pooling layer to reconstruct the original input samples. Below, we report on quantitative and qualitative aspects of our findings.

Table 1: Reconstruction quality and image quality of the individual models: For the three considered backbones, we report three performance metrics to assess reconstruction quality: cosine similarity in feature space (cosine-sim), calculated as the median of the cosine similarity over all superpixels; top5(1) matches using the top1 prediction of the original sample as ground truth (top5(1) match); and FID scores (FID) to assess the quality of the generated samples. We consider generative models conditioned on unpooled feature maps (rows 1-3) and models conditioned on pooled feature maps (rows 4-6). The results indicate that the proposed approach produces high-fidelity input samples as perceived by the respective models.

            Model      cosine-sim   top5(1) match   FID
unpooled    ResNet50   0.40         87% (63%)       12.32
            ConvNeXt   0.61         94% (76%)       7.14
            SwinV2     0.57         95% (81%)       11.56
pooled      ResNet50   0.09         47% (22%)       26.37
            ConvNeXt   0.18         42% (18%)       31.01
            SwinV2     0.14         47% (23%)       36.21

4.1 Quantitative and qualitative comparison

Experimental setup For each ImageNet class, we reconstructed 10 validation set samples with FeatInv, resulting in 10,000 reconstructed samples. We adjust the diffusion model's guidance scale to optimize the FID score between original and reconstructed samples on the validation set, resulting in a guidance scale of 1.75 for unpooled and 8 for pooled conditional input; see the supplementary material for details. We reconstruct with a sample step size of 50. For each model this took roughly 12 hours on a single NVIDIA L40 GPU. The generated samples are assessed according to two complementary quality criteria, reconstruction quality and sample quality:

1. Reconstruction quality The encoded generated image should end up close to the feature representation of the original sample, which can be understood as a reconstruction objective that is implemented implicitly by conditioning the diffusion model on a chosen feature map. (1a) The most obvious metric is the cosine similarity between both feature maps. However, not all parts of the feature space will be equally important for the downstream classifier. (1b) The most reliable measure is therefore the classifier output itself. Focusing on top predictions, one can compare the top-k predictions for the reconstruction to the top prediction for the original sample. More general alignment measures between generated input and original feature representation are not helpful in this context, as we require a precise reconstruction of the original feature space for the downstream classifier above the layer under consideration.

2. Sample quality We aim to generate samples within the set of high-fidelity natural images. In our case, this objective is implemented through the use of a pretrained diffusion model.
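A minimal sketch of the two reconstruction-quality metrics (1a) and (1b), assuming feature maps of shape (C, H, W) and classifier logits for the original and reconstructed images; the function names are ours:

```python
import torch
import torch.nn.functional as F

def cosine_sim_featuremaps(f_orig: torch.Tensor, f_rec: torch.Tensor) -> torch.Tensor:
    """Metric (1a): median over spatial positions ("superpixels") of the
    channel-wise cosine similarity between two (C, H, W) feature maps."""
    c = f_orig.shape[0]
    a = f_orig.reshape(c, -1)  # (C, H*W)
    b = f_rec.reshape(c, -1)
    sims = F.cosine_similarity(a, b, dim=0)  # one value per spatial position
    return sims.median()

def topk_match(logits_orig: torch.Tensor, logits_rec: torch.Tensor, k: int = 5) -> bool:
    """Metric (1b): is the top-1 class of the original sample among the
    top-k predictions for the reconstruction?"""
    target = logits_orig.argmax().item()
    return target in logits_rec.topk(k).indices.tolist()
```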
Apart from qualitative assessments in the following sections, we rely on FID scores as an established measure of sample quality.

Reconstruction quality Comparing identical models conditioned either on pooled or unpooled feature maps, unpooled models unsurprisingly show a significantly higher reconstruction quality. Samples generated by models conditioned on unpooled feature maps show very good alignment with the feature maps of the original samples (cosine similarities above 0.57 and top5 matching predictions of 94% or higher for the two modern vision backbones). Samples conditioned on pooled feature maps show some alignment but fail to accurately reconstruct the respective feature map and are therefore unreliable for investigations of structural properties of models. These findings support the hypothesis that the approach yields feature space reconstructions that closely match the original feature representations.

Figure 3: Qualitative comparison of reconstructed samples for the ResNet50, ConvNeXt, SwinV2 and SwinV2 pooled models. The cosine similarity between the original feature map and that of the reconstruction is noted at the bottom edge of the images. The qualitative comparison confirms the insights from the quantitative analysis in Tab. 1. The two modern vision backbones, ConvNeXt and SwinV2, show reconstructions that resemble the original very closely, not only in terms of semantic content and spatial alignment but also in terms of color schemes and fine-grained details. Semantic content and composition also mostly match for ResNet50, whereas not even the semantic content seems to be captured when using pooled representations (SwinV2 pooled as an example).

Sample quality The corresponding class-dependent diffusion model achieves an FID score around 29, which is typically considered good quality. The models conditioned on pooled representations still show acceptable FID scores between 26 and 36. Interestingly, models conditioned on unpooled representations show a significant increase in image quality, with FID scores between 7 and 12. These results support the statement that the created samples were sampled from the space of high-fidelity natural images.

Backbone comparison Within each category (pooled vs. unpooled), there is a gap between the two most recent model architectures, ConvNeXt and SwinV2, and the older ResNet50 models, notwithstanding the architectural differences (CNN vs. vision transformer) between the former two. The former achieve cosine similarities of about 0.6 and top5 matches of about 95% in the unpooled category. This suggests a qualitative difference between ResNet50 representations and those of more modern image backbones.

Qualitative comparison In Fig. 3, we present a qualitative comparison based on randomly selected samples. The visual impressions of ConvNeXt and SwinV2 reconstructions are similar to each other while also being close to the input sample, despite the fact that the models were trained on high-level semantic feature maps, i.e., without a reconstruction objective in input space. The ResNet50 reconstructions seem in many cases an interpretation of the sample's semantic content (see e.g. 2. toucan or 5. file), albeit with the correct spatial composition, while matching specific color composition and textures much less accurately than ConvNeXt and SwinV2.
We primarily attribute the differences between ResNet50 and ConvNeXt/SwinV2 to the nature of the feature spaces themselves, stressing a qualitative difference between modern architectures such as ConvNeXt and SwinV2 and older architectures such as ResNet50 that is much more pronounced than the differences between model families such as ViTs and CNNs.
Table 2: Cross-model evaluation: Percentage of matches between the actual predictions (top5/top1) and the predictions based on the reconstructions for different models. The FeatInv models based on the ResNet50, ConvNeXt and SwinV2 features were used for the reconstruction and evaluated by the same three models.

                            evaluated by
conditioned on    ResNet50      ConvNeXt      SwinV2
ResNet50          87% / 63%     85% / 60%     87% / 65%
ConvNeXt          92% / 72%     94% / 76%     95% / 80%
SwinV2            94% / 77%     95% / 79%     95% / 81%

The samples obtained from conditioning on pooled feature representations often seem to capture the overall semantic content of the image correctly (file, space shuttle, traffic light), but fail to reflect the details of the composition of the image.

Robustness evaluation To assess the robustness of the presented results, we carry out cross-model comparisons, where we measure model performance based on samples generated by conditioning on the feature map extracted from a different model. The results of this experiment are compiled in Tab. 2. It turns out that all three sets of samples (conditioned on features generated by the three different backbones) transfer remarkably well to other models. In the supplementary material, we also present results supporting the robustness of our approach when applied to out-of-distribution (OOD) samples.

4.2 Application: FeatInv-Viz – Visualizing concept steering in input space

Concept steering in input space In generative NLP, steering is sometimes used to verify concept interpretations by reducing or magnifying concepts in the model activations and observing how this changes the generated output text, as famously demonstrated with the Golden Gate concept [5] in Claude 3 Sonnet. This approach is not directly applicable to vision classifiers. However, with our method of inverting model representations from feature to input space, we can observe the effect of concept steering within hidden model activations in the input representation space instead of the output. This enables a novel method for concept visualization, with benefits over existing approaches (see below).

Concept definition Concepts are typically defined as structures in feature space, such as individual neurons, single directions or multi-dimensional subspaces. Many concept-based XAI methods define a way to decompose a feature vector into concepts from a dictionary/concept bank [9]. In this work, we use concepts from multi-dimensional concept discovery (MCD) [25], which defines concepts as linear subspaces of the feature space. Nevertheless, our approach is applicable to any concept discovery method.

Concept visualization through attenuated feature maps A common challenge for unsupervised concept discovery methods is inferring the meaning of discovered concepts. To address this, we steer a concept in feature space and observe the effect in input space. Specifically, we attenuate the coefficients of the concept under consideration to 25% (see the Supplementary Material for details).
Then, we use FeatInv to map the original and the modified feature map to input space using identical random seeds for the diffusion process. By comparing the resulting images, we gain insights into how the concept is expressed in input space. We call this method FeatInv-Viz and present it in Algorithm 1.

Algorithm 1: FeatInv-Viz: Visualization of concept steering in input space
Input: Model m, concept decomposition $\phi = \sum_i \phi_i$, concept with id c
Output: Visualization of concept c in input space
Notation: $x \in \mathbb{R}^{3 \times H \times W}$, where $x^{(j)}$ refers to the color channels with $j \in \{R, G, B\}$
  $\phi' \leftarrow \sum_{i \neq c} \phi_i + 0.25 \cdot \phi_c$   // attenuated feature map
  for i = 1 to n do
    $s_i \leftarrow$ RandomSeed()
    $x_i \leftarrow$ FeatInv($\phi$, seed = $s_i$)    // original reconstruction
    $x'_i \leftarrow$ FeatInv($\phi'$, seed = $s_i$)  // attenuated reconstruction
    $\Delta_i \leftarrow \sqrt{\sum_{j \in \{R,G,B\}} \big(x_i^{(j)} - x_i'^{(j)}\big)^2}$   // Euclidean distance
  return median$\{\Delta_i\}_{i=1}^{n}$   // median along sample axis

Exemplary results Fig. 4 shows exemplary concept steering visualizations for four samples from the Indigo Bunting class. Here, we decomposed ConvNeXt's feature space into three linear concept subspaces. FeatInv-Viz provides a visualization of these concepts in input space. The method provides a very fine-grained visualization of which specific regions in input space change upon steering each concept in feature space.

Benefits We emphasize that FeatInv-Viz extends commonly used concept activation maps in two ways: First, it provides a fine-grained visualization rather than a coarse upscaling [2, 25] of a lower-resolution feature map. Second, it goes beyond merely verifying alignment with predefined concepts [2] by providing counterfactual information from targeted feature-map manipulations.

Figure 4: FeatInv-Viz visualization of three concepts identified within ConvNeXt's feature space for the Indigo Bunting class, which can be associated with sky/background, bird head/breast and branches/leaves. For the visualization, we normalize the respective outputs of Algorithm 1 and threshold them below 0.33 as a binary mask to indicate unaffected regions of the image.

4.3 Application: Investigating the composite nature of the feature space

In NLP, well-known examples of feature-space arithmetic, e.g., king − man + woman = queen [17], have shaped our understanding of embedding geometries. FeatInv offers insights into the composite nature of the feature space in vision models by conditioning on feature maps from two samples. In particular, we investigate the effect of convex linear superpositions of two feature maps. To this end, we linearly interpolate between the feature representations of two input samples and visualize reconstructions for different weighted combinations, as shown in Fig. 5. We also indicate the cosine similarity between the reconstruction and the weighted feature map, which is highest for the original feature maps and typically reaches its lowest value for the equally weighted interpolation. This can be seen as an indication that the weighted average of two feature maps is in general not a well-defined operation. Nevertheless, foreground objects from one image and the background from a second seem to be reasonably combined through linear superposition (see e.g. bird, landscape).
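The interpolation experiment can be summarized by the following sketch, where featinv stands in for the trained FeatInv model; the assumption that it is a callable from feature map and seed to image is ours:

```python
import torch

@torch.no_grad()
def interpolate_reconstructions(featinv, feat_a, feat_b,
                                weights=(1.0, 0.75, 0.6, 0.5, 0.4, 0.25, 0.0),
                                seed=0):
    """Reconstructions from convex combinations of two feature maps.
    The weights mirror the 100%/0% ... 0%/100% combinations shown in Fig. 5."""
    images = []
    for w in weights:
        mixed = w * feat_a + (1.0 - w) * feat_b  # convex linear superposition
        images.append(featinv(mixed, seed=seed))  # same seed for comparability
    return images
```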
Figure 5: Reconstructions from weighted combinations of two ConvNeXt feature maps. The cosine similarity between the weighted feature map and that of the reconstruction is noted at the bottom edge of the images.

4.4 Limitations and future work

Our work is subject to different limitations, which provide directions for future investigations: First, the present work focuses exclusively on the domain of natural images. It would be very instructive to extend the approach to other domains, such as medical imaging. Second, the proposed approach, building on the ControlNet method, relies on a pretrained diffusion model, which might not be readily available in all application contexts. Third, every model and layer choice requires training a dedicated FeatInv model, which represents a computational hurdle. First experiments and the results in Tab. 2 indicate that finetuning could be beneficial to alleviate this issue. Finally, both application scenarios rely on modifications of the feature space. In order to obtain reliable results, it would be instrumental to introduce measures to detect input samples, i.e., feature maps, that are outside the scope of the model.

5 Summary and Discussion

In this work, we address the problem of obtaining insights into the structure of a given model's feature map by means of a learned probabilistic mapping from feature space to input space, implemented as a conditional diffusion model. We demonstrate the feasibility of training such a model in a ControlNet style, achieving very accurate and robust reconstruction results across different model architectures. We present two possible applications, both of which relate to gaining insight into manipulated feature maps. However, we believe that the proposed approach could be widely applicable to further applications. We envision a potentially positive societal impact through improved model understanding, along the lines of the concept steering use case. The source code underlying our investigations is available at https://github.com/AI4HealthUOL/FeatInv.

References

[1] M. Avitan, R. Cotterell, Y. Goldberg, and S. Ravfogel. A practical method for generating string counterfactuals. arXiv preprint arXiv:2402.11355, 2025. URL https://arxiv.org/abs/2402.11355.
[2] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network dissection: Quantifying interpretability of deep visual representations. In IEEE Conference on Computer Vision and Pattern Recognition, pages 6541-6549, 2017.
[3] L. Bereska and S. Gavves. Mechanistic interpretability for AI safety - a review. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=ePUVetPKu6. Survey Certification, Expert Certification.
[4] F. Bordes, R. Balestriero, and P. Vincent. High fidelity visualization of what your self-supervised representation knows about. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id=urfWb7VjmL.
[5] T. Bricken, A. Reddy, T. Conerly, V. Varma, L. Chan, C. Burns, and N. Nanda. Scaling monosemanticity: Learning features that resist polysemanticity in large language models. Transformer Circuits, 2024. URL https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html. Accessed: April 29, 2025.
[6] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. Advances in Neural Information Processing Systems, 29, 2016.
[7] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network. 2009.
[8] P. Esser, R. Rombach, and B. Ommer. A disentangling invertible interpretation network for explaining latent representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9223-9232, 2020.
[9] T. Fel, V. Boutin, L. Béthune, R. Cadene, M. Moayeri, L. Andéol, M. Chalvidal, and T. Serre. A holistic approach to unifying automatic concept extraction and concept importance estimation. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 54805-54818. Curran Associates, Inc., 2023.
[10] M. Goldblum, H. Souri, R. Ni, M. Shu, V. Prabhu, G. Somepalli, P. Chattopadhyay, M. Ibrahim, A. Bardes, J. Hoffman, et al. Battle of the backbones: A large-scale comparison of pretrained models across computer vision tasks. Advances in Neural Information Processing Systems, 36, 2024.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
[12] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.
[13] A. Jaegle, F. Gimeno, A. Brock, O. Vinyals, A. Zisserman, and J. Carreira. Perceiver: General perception with iterative attention. In International Conference on Machine Learning, pages 4651-4664. PMLR, 2021.
[14] Y. Li, H. Liu, Q. Wu, F. Mu, J. Yang, J. Gao, C. Li, and Y. J. Lee. GLIGEN: Open-set grounded text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22511-22521, 2023.
[15] Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12009-12019, 2022.
[16] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976-11986, 2022.
[17] T. Mikolov, W.-t. Yih, and G. Zweig. Linguistic regularities in continuous space word representations. In L. Vanderwende, H. Daumé III, and K. Kirchhoff, editors, Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751, Atlanta, Georgia, June 2013. Association for Computational Linguistics. URL https://aclanthology.org/N13-1090/.
[18] J. X. Morris, V. Kuleshov, V. Shmatikov, and A. M. Rush. Text embeddings reveal (almost) as much as text. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023. URL https://openreview.net/forum?id=EDuKP7DqCk.
[19] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. Advances in Neural Information Processing Systems, 29, 2016.
[20] J. Pinkney. miniSD. https://huggingface.co/justinpinkney/miniSD, 2023. Hugging Face Model Repository.
[21] D. Rai, Y. Zhou, S. Feng, A. Saparov, and Z. Yao. A practical review of mechanistic interpretability for transformer-based language models. arXiv preprint arXiv:2407.02646, 2024.
[22] J. Rathjens, S. Reyhanian, D. Kappel, and L. Wiskott. Inverting transformer-based vision models. arXiv preprint arXiv:2412.06534, 2024. URL https://arxiv.org/abs/2412.06534.
[23] R. Rombach, P. Esser, and B. Ommer. Making Sense of CNNs: Interpreting Deep Representations and Their Invariances with INNs, pages 647-664. Springer International Publishing, 2020. ISBN 9783030585204. doi: 10.1007/978-3-030-58520-4_38. URL http://dx.doi.org/10.1007/978-3-030-58520-4_38.
[24] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618-626, 2017.
[25] J. Vielhaben, S. Bluecher, and N. Strodthoff. Multi-dimensional concept discovery (MCD): A unifying framework with completeness guarantees. Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=KxBQPz7HKh.
[26] R. Wightman, H. Touvron, and H. Jegou. ResNet strikes back: An improved training procedure in timm. In NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future, 2021. URL https://openreview.net/forum?id=NG6MJnVl6M5.
[27] R. Yi, H. Tian, Z. Gu, Y.-K. Lai, and P. L. Rosin. Towards artistic image aesthetics assessment: A large-scale dataset and a new method. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22388-22397, June 2023.
[28] M. Zeiler. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, volume 1311, 2014.
[29] L. Zhang, A. Rao, and M. Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836-3847, 2023.
[30] T. Zhang, Z. Wang, J. Huang, M. M. Tasnim, and W. Shi. A survey of diffusion based image generation models: Issues and their solutions. arXiv preprint arXiv:2308.13142, 2023.

A Ablation studies

A.1 Additional models

For the ablation studies, we consider ResNet50B [26] (timm model weights: resnet50.a1_in1k) as a second ResNet model to study the impact of the training procedure, in addition to the models studied in the main text.

Choice of conditional encoder Next to bilinear upsampling followed by a shallow CNN, we explore a different design choice for the conditional encoder: the use of cross-attention as in the Perceiver architecture [13], which uses a learnable representation of predefined size and connects it via cross-attention to the representation that serves as input for the mapping. Unlike the previous approach, this approach is not subject to any locality assumptions and is therefore the most flexible approach, particularly suitable for model architectures without built-in locality assumptions, such as vision transformers. We carry out a comparison between the two conditional encoders for the case of the SwinV2 ViT, where we expect the impact to be most pronounced, as the ViT operates on visual tokens, which might not align well with the upscaling operation in the convolutional encoder. On the contrary, our results indicate that the convolutional encoder yields better classification results, and we therefore restrict ourselves to the convolutional encoder in the following experiments.

Class-conditional baseline We tested the models on an unconditional baseline to see whether MiniSD is able to generate good representations of the classes and whether these can be correctly classified. To define the classes in the input prompt as precisely as possible, we use the WordNet hierarchy and create the prompt as follows: 'a high-quality, detailed, and professional image of a CLASS, which is a kind of SUPERCLASS', as for example 'a high-quality, detailed, and professional image of a tench, which is a kind of cyprinid'. The unconditional baseline consists of 10 samples per class, i.e., 10,000 samples in total.
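A sketch of this prompt construction, using NLTK's WordNet interface as one possible implementation; the mapping from ImageNet classes to WordNet synsets is assumed to be given:

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def class_prompt(synset_id: str) -> str:
    """Builds the class-conditional prompt from the WordNet hierarchy,
    taking the first hypernym as the superclass (an illustrative choice)."""
    syn = wn.synset(synset_id)  # e.g. "tench.n.01"
    cls = syn.lemmas()[0].name().replace("_", " ")
    sup = syn.hypernyms()[0].lemmas()[0].name().replace("_", " ")
    return (f"a high-quality, detailed, and professional image of a {cls}, "
            f"which is a kind of {sup}")

# Expected output for the example in the text:
# 'a high-quality, detailed, and professional image of a tench,
#  which is a kind of cyprinid'
print(class_prompt("tench.n.01"))
```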
Table 3: Additional model insights: For the different backbones, we indicate the corresponding canonical input sizes as well as the top5(1) accuracy on an unconditional dataset created by MiniSD. Since only the base model is used for the prediction of these unconditional samples, the results do not differ between unpooled and pooled, nor between the different SwinV2 variants. We show three performance metrics: cosine similarity in feature space (cosine-sim), calculated as the median of the cosine similarity over all superpixels; top5(1) matches using the top1 prediction of the original sample as ground truth (top5(1) match); and FID scores (FID) to assess the quality of the generated samples. We consider generative models conditioned on unpooled feature maps (rows 1-5) and models conditioned on pooled feature maps (rows 6-9).

            Model                input     MiniSD top5(1)  cosine-sim  top5(1) match  FID
unpooled    ResNet50             224×224   89% (68%)       0.40        87% (63%)      12.32
            ResNet50B            224×224   88% (65%)       0.50        90% (68%)      18.58
            ConvNeXt             288×288   92% (71%)       0.61        94% (76%)      7.14
            SwinV2               384×384   93% (72%)       0.57        95% (81%)      11.56
            SwinV2 (Perceiver)   384×384   93% (72%)       0.37        79% (55%)      22.72
pooled      ResNet50             224×224   89% (68%)       0.09        47% (22%)      26.37
            ResNet50B            224×224   88% (65%)       0.14        44% (21%)      37.44
            ConvNeXt             288×288   92% (71%)       0.18        42% (18%)      31.01
            SwinV2               384×384   93% (72%)       0.14        47% (23%)      36.21

A.2 FID scores vs. guidance scale

In Fig. 6, we present several original images alongside their respective reconstructions, generated using the model trained with the ConvNeXt backbone. These samples were produced with varying levels of guidance scale. Since the guidance scale significantly influences the output, we evaluated the model's performance across different scale settings, again using the ConvNeXt-based model. To quantify the quality of the generated images, we employed FID and cosine distance, as illustrated in Fig. 7. We opted to use cosine distance (rather than similarity) to ensure that, in both metrics, lower values indicate better performance, simplifying comparison. The results suggest that the unpooled feature map requires minimal text guidance, whereas for the pooled variant guidance is of substantial importance, as natural-looking images cannot be generated without its influence.

Figure 6: Unpooled and pooled reconstructions with different guidance scales (1.0, 1.75, 3.0, 5.0, 8.0) from ConvNeXt feature maps. The cosine similarity between the original feature map and that of the reconstruction is noted at the bottom edge of the images. While increasing the guidance scale in unpooled samples results in higher color saturation and more distorted object shapes, the same increase improves object fidelity in pooled feature maps. This trend is also roughly reflected in the corresponding cosine distances.
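The guidance-scale sweep behind Figs. 6 and 7 can be sketched as follows, using torchmetrics for FID; the function `generate` is a placeholder for the FeatInv sampling loop and is our assumption, not part of the released code:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

@torch.no_grad()
def fid_per_guidance_scale(generate, real_images,
                           scales=(1.0, 1.75, 3.0, 5.0, 8.0)):
    """Score each guidance scale with FID against the original images.
    `real_images` and the output of `generate(scale)` are assumed to be
    uint8 tensors of shape (N, 3, H, W), as torchmetrics' FID expects."""
    results = {}
    for s in scales:
        fid = FrechetInceptionDistance(feature=2048)
        fid.update(real_images, real=True)
        fid.update(generate(s), real=False)
        results[s] = fid.compute().item()  # lower is better
    return results
```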
Figure 7: Impact of guidance scale on generation quality (FID) and semantic alignment (cosine distance). FID values (solid lines) and cosine distance (dashed lines) for models with ConvNeXt backbone, with and without pooling. Lower values in both metrics indicate better performance. The unpooled model achieves optimal quality at a guidance scale of 1.75, with both low FID and low cosine distance. In contrast, the pooled model performs significantly worse at all scales, with increasing FID and cosine distance as the guidance scale decreases; it reaches its best performance near a scale of 8.0.

A.3 Generation on unpooled vs. pooled feature maps

We analyzed the generated samples for each model using both unpooled and pooled feature maps. As shown in Tables 1 and 3, the samples generated from pooled feature maps consistently exhibit lower visual quality compared to those generated from unpooled feature maps. This indicates that preserving spatial detail during feature extraction is crucial for high-quality image generation. Fig. 8 illustrates this difference with examples from the class zebra, clearly highlighting the superior fidelity of samples generated from unpooled features.

Figure 8: Comparison of unpooled and pooled generated samples, showing examples of the class zebra. All samples were generated using the model trained with the ConvNeXt backbone. The unpooled feature maps result in sharper and more realistic generations, while the pooled versions show a noticeable loss in detail and structure, mostly conveying only the general context of the original image. The cosine similarity between the original feature map and that of the reconstruction is noted at the bottom edge of the images.

B OOD evaluation

In Fig. 9, we qualitatively test the robustness of our findings by conditioning on samples that are slightly out of the scope of the original models finetuned on ImageNet. To this end, we use the BAID [27] dataset, which differs in style from the samples in ImageNet. The ControlNet trained on ConvNeXt features still shows a good reconstruction quality of the semantic content, but the style of the reconstruction and the original image differ more strongly than in the in-distribution case in Fig. 3. Nevertheless, the results speak for the robustness of the proposed approach.

Figure 9: Reconstruction on OOD samples. Comparison of randomly selected original and generated samples from the BAID [27] dataset, which differs in style from the samples in ImageNet. The basis for the reconstruction was the ControlNet trained on ConvNeXt features, which received the ConvNeXt features of the samples shown as input. The cosine similarity between the original feature map and that of the reconstruction is noted at the bottom edge of the images.
C Details on concept steering

For concept discovery, we rely on multi-dimensional concept discovery (MCD) [25]. For every feature vector $\phi$, MCD provides a concept decomposition $\phi = \sum_{i=1}^{n_c+1} \phi_i$, where $\phi_i$ is associated with concept $i$ (of in total $n_c$ concepts), each representing a linear subspace of the feature space, and concept $n_c + 1$ corresponds to the orthogonal complement of the span of all concept subspaces. The latter is necessary to achieve a complete decomposition, covering what is not explicitly captured through concept subspaces. For a given feature vector, one can now quantify the contribution $\phi_i$ of concept $i$ and visualize its magnitude $\|\phi_i\|_2$ across the entire feature map to obtain a spatially resolved concept activation map. One option to align such a coarse concept activation map with the input image is bilinear upsampling, but this process often leads to rather diffuse concept activation maps. Even though we use MCD for demonstration, this alignment step is a common challenge for most concept-based attribution maps.

We used the learned mapping from feature space to input space to infer high-resolution concept visualizations. To this end, the component $\phi_i$ associated with concept $i$ was multiplied by 0.25 to attenuate it. In our experiments, decreasing the feature values worked better than increasing them; we speculate that increasing feature vector components can eventually result in feature vectors that exceed the magnitude of the feature vectors seen during training of the FeatInv model. To obtain higher-resolution representations, we used FeatInv to reconstruct five samples for each concept, using both the original feature map and the concept-manipulated feature map. Using the same random seed for each pair ensured that the original and manipulated reconstructions were directly comparable, with differences attributable solely to the feature manipulation. For each pair, we computed the pixel-wise difference, and to produce a representative difference map for each concept, we took the median across the five resulting difference maps. This yielded a high-resolution (256×256) activation map that highlights the specific regions of the image affected by the manipulation.

D Spatial composition

In Fig. 10, we show spatially composed combinations of two feature maps. The results indicate that feature maps exert a very local influence, which aligns well with the simple upscaling of the feature map resolution to the input resolution.

Figure 10: Reconstructions of spatially composed mixtures of two ConvNeXt feature maps. The cosine similarity between the manipulated map and that of the reconstruction is noted at the bottom edge of the images. The yellow outlines show the part of the feature map that was manipulated.
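As a compact summary of the concept-steering procedure described in Appendix C (and Algorithm 1), the following sketch computes the median difference map; `featinv` again stands in for the trained model, and `phi_parts` for the MCD decomposition, both assumed interfaces:

```python
import torch

@torch.no_grad()
def concept_difference_map(featinv, phi, phi_parts, concept_id,
                           n_seeds=5, factor=0.25):
    """Attenuate one MCD concept component, reconstruct with identical seeds,
    and take the median pixel-wise Euclidean difference over RGB channels."""
    # phi = sum(phi_parts); scale the chosen component down to `factor`.
    phi_mod = phi - (1.0 - factor) * phi_parts[concept_id]
    diffs = []
    for seed in range(n_seeds):
        x = featinv(phi, seed=seed)          # original reconstruction (3, H, W)
        x_mod = featinv(phi_mod, seed=seed)  # attenuated reconstruction
        diffs.append(((x - x_mod) ** 2).sum(dim=0).sqrt())  # (H, W)
    return torch.stack(diffs).median(dim=0).values  # e.g. a 256x256 map
```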
RainFusion: Adaptive Video Generation Acceleration via Multi-Dimensional Visual Redundancy

Aiyue Chen1*, Bin Dong1*, Jingru Li1, Jing Lin1, Yiwu Yao1, Gongyi Wang1
1 Huawei Technologies Co., Ltd

Abstract

Video generation using diffusion models is highly computationally intensive, with 3D attention in Diffusion Transformer (DiT) models accounting for over 80% of the total computational resources. In this work, we introduce RainFusion, a novel training-free sparse attention method that exploits the inherent sparsity of visual data to accelerate attention computation while preserving video quality. Specifically, we identify three unique sparse patterns in video generation attention calculations: the Spatial Pattern, the Temporal Pattern and the Textural Pattern. The sparse pattern for each attention head is determined online with negligible overhead (~0.2%) by our proposed ARM (Adaptive Recognition Module) during inference. Our proposed RainFusion is a plug-and-play method that can be seamlessly integrated into state-of-the-art 3D-attention video generation models without additional training or calibration. We evaluate our method on leading open-sourced models including HunyuanVideo, OpenSoraPlan-1.2 and CogVideoX-5B, demonstrating its broad applicability and effectiveness. Experimental results show that RainFusion achieves over 2× speedup in attention computation while maintaining video quality, with only a minimal impact on VBench scores (-0.2%).

1. Introduction

Diffusion models have become the leading approach in video generation, demonstrating exceptional performance and broad applicability [1] [9] [28] [24]. Initially built on U-Net architectures [1] [9], the field has transitioned to Diffusion Transformers (DiTs), which now serve as the mainstream approach owing to their enhanced performance and scalability. This architectural evolution has further advanced with the adoption of 3D full-sequence attention mechanisms [24] [28], replacing the previously dominant 2D+1D spatial-temporal attention (STDiT) [21] that computes spatial and temporal attention separately in alternation. Although these advancements have enhanced modeling capabilities, they also impose substantial computational challenges, particularly in attention computation.

*These authors contributed equally.

Figure 1: HunyuanVideo 720p RainFusion results. RainFusion, and RainFusion combined with ∆-DiT, show good visual quality and high similarity to the dense results. Upper prompt: "A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage.". Lower prompt: "A litter of golden retriever puppies playing in the snow. Their heads pop out of the snow, covered in.".

The computational complexity of these models scales quadratically with the sequence length, expressed as O(s²t²), where s and t represent the spatial and temporal dimensions, respectively. This scaling poses a substantial bottleneck, as evidenced by the deployment of Open-Sora-Plan 1.2 [26] on a single A100 GPU, which requires approximately 48 minutes to generate a 4-second 720p video.

Figure 2: (a) The RainFusion pipeline, including the Adaptive Recognition Module (ARM) and the application of the sparse pattern in Flash Attention. (b) ARM determines the pattern by using a subset of queries and keys to calculate an approximate attention score and applying the predefined pattern mask to obtain the attention recall, which determines the head category.
The sampled queries Q′ and keys K′ are either sourced from the tokens of the first frame or obtained by sampling from the full set of tokens at equal intervals. (c) The three head sparse patterns. These three heads respectively concentrate on portraying global spatial details with local temporal information, local spatial details with global temporal information, and high-level textural information.
Profiling analysis demonstrates that the attention mechanism consumes over 80% of total computation, making it the principal performance bottleneck in the video generation pipeline.

To improve the computational efficiency of video generation models, researchers have developed two key families of algorithms: (1) sampling optimization techniques that reduce the number of required inference steps through adaptive sampling schedules [15] [30], and (2) caching mechanisms that exploit redundancy by reusing features across adjacent timesteps [6] [13] [16]. Sampling optimization techniques are inherently limited by their dependency on post-training adjustments, limiting their practical applicability. Furthermore, both sampling optimization and caching algorithms require models to operate with a relatively large number of inference steps, as their effectiveness relies heavily on sufficient redundancy between consecutive timesteps. Despite these advancements, optimizing the attention mechanism itself has not yet been explored in depth. DiTFastAttn [40] uses a brute-force sliding-window mask and a residual cache to compensate for quality loss. SVG [38] ignores model generality and the inherent visual features of video.

In this work, we introduce a novel sparse attention mechanism that effectively leverages two key characteristics of video generation: (1) the inherent spatial-temporal redundancy in video, and (2) the importance of specific image texture. We observe that there exist three types of sparse patterns in attention: a temporal pattern, which attends to the same spatial location in different frames; a spatial pattern, which models all spatial locations in consecutive frames; and a textural pattern, which captures the detailed texture of video frames. Fig. 3 shows the attention score maps of several heads, with the vertical axis and the horizontal axis representing queries and keys, respectively. The first row captures local repetitive patterns within each window, which we define as the Temporal Head, indicating that certain heads consistently attend to the same locations across different frames. The second row reveals more global patterns across neighboring frames, which we term the Spatial Head. The third row highlights Textural Heads, where important tokens are attended to by all query tokens. We determine the sparse pattern for each head online using the Adaptive Recognition Module (ARM), which introduces only a 1/t² overhead, where t represents the number of frames in latent space. The overall pipeline is shown in Fig. 2. We first determine the sparse pattern of each head online using global or local sampling, and then calculate attention using the respective sparse pattern.

Figure 3: The attention sparsity patterns, with the vertical axis and the horizontal axis representing query and key, respectively. The first row depicts the temporal sparsity pattern, which models the same spatial location across different frames (with the red box in the upper-left corner highlighting the basic repeated pattern). The second row shows the spatial sparsity pattern, focusing on all locations in neighboring frames. The third row presents a conventional full-attention head, for which we propose a sophisticated textural sparse attention mechanism.
Extensive experiments on different video generation models, including OpenSoraPlan-1.2 [26], HunyuanVideo-13B [14] and CogVideoX-5B [39], prove the generality and effectiveness of RainFusion. The contributions of this paper include:

• We present RainFusion, a novel plug-and-play framework that leverages tri-dimensional sparsity across spatial, temporal, and textural domains to optimize video diffusion models. The proposed method dynamically determines sparse patterns through online estimation, effectively exploiting the intrinsic redundancy inherent in video data. The name RainFusion is derived from the observation that the sparse patterns resemble the continuous, interconnected lines formed by rain.
• We put forward a simple but potent sparse pattern estimation method, ARM, that entails minimal computational cost (~0.2% overhead), thereby rendering RainFusion highly efficient.
• RainFusion can be applied to many SOTA video generation models, OpenSoraPlan-1.2 [26], HunyuanVideo-13B [14] and CogVideoX-5B [39], with over 2× speedup in attention at negligible quality loss (-0.2% VBench score), as shown in Fig. 5.

2. Related Work

2.1. Diffusion Models

Diffusion models [4, 8, 10, 18, 22, 25, 42] have surpassed Generative Adversarial Networks (GANs) in generative tasks by iteratively reversing a noising process to synthesize data, such as images, through progressive denoising. These models typically use U-Net [2, 27, 29] or transformer-based architectures [25], with the latter gaining prominence in vision applications, as seen in DiT (Diffusion Transformers) [25] for data distribution modeling and PixArt-Σ [4] for 4K image generation. Furthermore, diffusion models have been extended to video synthesis [2], with two main approaches emerging: (1) the 2D+1D STDiT structure, used in Open-Sora [42], and (2) the 3D full-sequence attention mechanism, employed by Sora [24], Open-Sora-Plan 1.2 [26], CogVideoX [39], and HunyuanVideo [14]. These developments underscore the versatility and scalability of diffusion models in tackling increasingly complex generative tasks.

2.2. Sparse Attention in Transformers

In Transformer-based large models, the quadratic complexity of the matrix multiplication QKᵀ in attention mechanisms drives high computational costs. To address this, recent research exploits sparsity in attention maps [12, 32, 41], and some works use techniques like token pruning [31, 33, 35] and token merging [3, 34, 37] to reduce sequence length and improve inference efficiency. Some methods employ dynamic sparse attention [12] or merge sparse tokens [32] to accelerate LLM inference. Similarly, in vision-specific models like ViTs and DiTs, sparsity is leveraged through dynamic activation pruning [7], pixel downsampling [31], and KV matrix downsampling [33], while video generation adapts token merging via bipartite soft matching [3], importance sampling [37], and spectrum-preserving techniques [34]. These advancements highlight the broad potential of sparse attention to enhance efficiency across diverse domains.

2.3. Attention Sharing and Cache

When accelerating diffusion model inference, cache methods leverage attention map similarity between adjacent denoising timesteps [16, 19, 36]. For example, ∆-DiT [5] introduces a tailored caching method for DiT acceleration, while DeepCache [20] and TGATE [17] reduce redundant calculations via layer-wise attention similarities. Recent methods further optimize performance by caching model outputs [16] or dynamically adjusting caching strategies [13].
Additionally, techniques like DiTFastAttn [40] combine sparse attention with caching, exploiting spatial, temporal, and conditional redundancies for efficient attention compression.
These advancements demonstrate the potential of integrating sparse attention and caching to enhance the scalability and speed of diffusion model inference.

Recent Work. SVG [38] advances sparse attention research by analyzing spatial and temporal attention sparsity in DiTs and proposing a training-free online profiling strategy. However, it classifies attention heads only into temporal and spatial groups, neglecting irregular attention patterns in video generation. Unlike SVG, our work focuses on irregular attention heads to capture fine-grained textural details for improved video generation.

3. Methodology

In this section, we introduce RainFusion, a training-free adaptive algorithm designed to exploit the computational sparsity in 3D full attention to accelerate video generation.

3.1. Preliminary

Existing video generation models utilize 3D full attention mechanisms, which jointly capture both spatial and temporal dependencies to elevate generation quality. However, this comes at a high computational cost. We define the shape of the latent video as (H, W, T). In 3D full attention, the video sequence is formed by flattening T sub-sequences, each representing a single frame of length H×W. We denote (Q, K, V) ∈ R^{N×d} as the query, key, and value tokens, respectively, and define M as the attention mask with shape N×N, where N = H×W×T and d is the hidden dimension of each head. The bidirectional 3D full attention can be formulated as follows:

$$S(Q, K, M) \leftarrow \mathrm{Softmax}\!\left(\frac{QK^T}{\sqrt{d}} + M\right) \qquad (1)$$
$$\mathrm{Attn}(Q, K, V, M) \leftarrow S(Q, K, M)\, V \qquad (2)$$

The computational complexity is O(N²). While 3D full attention mechanisms are inherently dense, our analysis reveals discernible computational sparsity patterns across attention heads. As shown in Fig. 3 and Fig. 2 (c), we classify these specialized heads into three categories: Spatial Head, Temporal Head, and Textural Head.

3.2. Attention Head Mechanism Design

Spatial Head The Spatial Head exhibits global spatial dependencies within individual frames while capturing localized temporal dependencies across the full sequence. This characteristic indicates that the Spatial Head emphasizes both the completeness of individual frames and the overall coherence among adjacent or key frames. Consequently, it suggests that certain non-key frames hold relatively less significance and can be excluded from the attention calculation:

$$\mathrm{Attn}_{\mathrm{spatial}} \leftarrow \mathrm{Attn}(Q_f, K_{\{f'\}}, V_{\{f'\}}, M_{\mathrm{spatial}}) \qquad (3)$$

Here, {f'} denotes the set of significant frames for the f-th frame. Therefore, a global striped attention mask M_spatial is designed, as depicted in Fig. 2 (c). A continuous sub-sequence of a frame is defined as a window segment. The positions of these window segments determine both the key frames attended to by the attention mechanism and the resulting computational gains.

Figure 4: The attention score map of a typical textural head. The green region represents a single frame. Notably, the pink region, characterized by high attention scores for most Q, coincides with the motion regions emphasized by the prompt.
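To make Eqs. (1)-(3) concrete, the following sketch gives a reference implementation of masked 3D full attention together with one illustrative construction of a striped spatial mask; the exact window-segment layout of the paper's M_spatial (Fig. 2 (c)) is our assumption:

```python
import math
import torch

def masked_attention(q, k, v, mask):
    """Eqs. (1)-(2): bidirectional 3D full attention with an additive mask.
    q, k, v: (N, d) with N = H*W*T; mask: (N, N) with 0 for kept entries
    and -inf for dropped ones. Reference code, not the fused kernel."""
    d = q.shape[-1]
    scores = q @ k.T / math.sqrt(d) + mask
    return torch.softmax(scores, dim=-1) @ v

def striped_spatial_mask(h, w, t, key_frames):
    """Illustrative M_spatial: every query attends only to the window
    segments of a set of key frames. `key_frames` must be non-empty so
    each softmax row has at least one finite entry."""
    n, frame_len = h * w * t, h * w
    mask = torch.full((n, n), float("-inf"))
    for f in key_frames:
        mask[:, f * frame_len:(f + 1) * frame_len] = 0.0
    return mask

# Toy usage: a 4x4 spatial grid over 8 frames, attending to frames {0, 3, 7}.
h, w, t, d = 4, 4, 8, 64
q = torch.randn(h * w * t, d); k = torch.randn_like(q); v = torch.randn_like(q)
out = masked_attention(q, k, v, striped_spatial_mask(h, w, t, [0, 3, 7]))
```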
Temporal Head Contrary to the Spatial Head, the Temporal Head demonstrates locality within a single-frame sub-sequence in the spatial domain, while exhibiting a global characteristic across the whole temporal domain. The Temporal Head is particularly attentive to the correlation between the same local regions across different video frames. Its primary focus is on creating regional details that maintain spatial continuity. This unique property manifests as local sparsity within a single-frame sub-sequence and periodic sparsity throughout the entire sequence, as shown in Fig. 2 (c).

Textural Head It becomes evident that certain content holds significant importance throughout the entire video, particularly those parts intricately linked to the high-level textural description, as shown in Fig. 4. This is manifested in the fact that some specific K, V consistently receive high attention scores for most Q. As a result, while the distribution of important tokens is sparse, it is challenging to identify a regular attention mask that can effectively adapt to this sparsity. Based on these considerations, we condense the K, V sequence approximately, drawing on the properties of image downsampling. The K, V sequence is rearranged and tokens are retained in a checkerboard-interleaved pattern in the spatial domain, as depicted in Fig. 2 (c):

$$C = \{a_{ij} \mid (i \bmod \tau = k) \wedge (j \bmod \tau = k),\ 0 \le i < H,\ 0 \le j < W,\ 0 \le k < \tau\} \qquad (4)$$
$$\mathrm{Attn}_{\mathrm{irregular}} \leftarrow \mathrm{Attn}(Q, K_{\{C\}}, V_{\{C\}}, M_{\mathrm{init}}) \qquad (5)$$

C indicates the set of chosen K, V token indexes, τ represents the stride of the checkerboard, and M_init is the all-zero mask. The checkerboard format ensures that information from each discarded token in the spatial domain can be implicitly recovered by referring to the four nearest remaining tokens. Additionally, we opt to directly retain or discard tokens rather than averaging them, because averaging would obscure the intrinsic information of tokens, making it challenging to implicitly supply the correct information for discarded tokens.

3.3. Adaptive Recognition Module (ARM)

As described in Sec. 3.2, RainFusion categorizes all heads into three distinct types: Spatial Head, Temporal Head, and Textural Head. However, we find that the pattern of each head is highly dynamic: factors such as input prompts and sampling steps all influence the characteristics of each head. Given these considerations, we introduce the Adaptive Recognition Module (ARM), designed to classify all heads online and adaptively with minimal computational cost.

We first acquire an approximate attention score, and then compute the masked attention recall. As illustrated in Fig. 2 (b), in local sampling, we select the tokens of the first-frame sub-sequence as $\hat{Q}, \hat{K}$. In the case of global sampling, we sample tokens at equal intervals ω as $\tilde{Q}, \tilde{K}$. We utilize the downsampled sequences to calculate the attention score, which serves as an approximation of the overall attention score. We then compute the masked recall based on the approximate score:

$$R' \leftarrow \mathrm{Recall}(Q', K', M') = \frac{S(Q', K', M')}{S(Q', K', M_{\mathrm{init}})} \qquad (6)$$

Q', K' represent the downsampled sequences. M' denotes the attention mask derived by downsampling either M_spatial or M_temporal in accordance with the corresponding token downsampling rules. S denotes the softmax operation as shown in Equation (1). Attention recall measures the proportion of valid information that can be preserved under the current pattern mask.
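A sketch of the recall computation of Eq. (6) under one plausible reading, namely the fraction of the dense attention mass that a candidate pattern mask preserves, averaged over queries; ARM's thresholding then follows Algorithm 1 below:

```python
import math
import torch

def attention_recall(q_sub, k_sub, pattern_mask):
    """Approximate recall of a candidate sparse pattern.
    q_sub, k_sub: downsampled queries/keys of shape (n, d);
    pattern_mask: (n, n) boolean, True where the pattern keeps an entry.
    Our interpretation of Eq. (6), not the paper's exact kernel."""
    d = q_sub.shape[-1]
    # Dense attention probabilities under the all-zero mask M_init.
    probs = torch.softmax(q_sub @ k_sub.T / math.sqrt(d), dim=-1)
    kept = (probs * pattern_mask).sum(dim=-1)  # preserved mass per query
    return kept.mean()
```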
Through this method, we are able to adaptively and efficiently determine the category of each head online with minimal computational overhead. Algorithm 1 provides a detailed introduction to the process of the Adaptive Recognition Module (ARM).

Algorithm 1: Adaptive Recognition Module (ARM)
Input: Q, K, M_spatial, M_temporal
Output: H  // head category
  $\hat{Q}, \hat{K}, \hat{M}_{temporal} \leftarrow$ LocalSampling(Q, K, M_temporal)
  $\tilde{Q}, \tilde{K}, \tilde{M}_{spatial} \leftarrow$ GlobalSampling(Q, K, M_spatial)
  $\hat{R} \leftarrow$ Recall($\hat{Q}, \hat{K}, \hat{M}_{temporal}$)
  $\tilde{R} \leftarrow$ Recall($\tilde{Q}, \tilde{K}, \tilde{M}_{spatial}$)
  if $\hat{R} \ge \alpha$ then
    H ← Temporal Head  // high priority for Temporal
  else if $\tilde{R} \ge \alpha$ then
    H ← Spatial Head
  else
    H ← Textural Head
  end if
  return H

4. Experiments

4.1. Settings

Models We evaluate RainFusion on three widely adopted video generation models: OpenSoraPlan-1.2 [26], HunyuanVideo-13B [14] and CogVideoX-5B [39]. For HunyuanVideo and OpenSoraPlan-1.2, we generate 125 and 93 frames at 480p resolution, with latent dimensions of (32, 30, 40) and (24, 30, 40) after VAE downsampling and patch embedding, respectively. For CogVideoX-5B, 45 frames are generated at 480×720, corresponding to a latent shape of (12, 30, 45).

Datasets and Benchmarks VBench [11] is a comprehensive benchmark suite for video generation tasks, systematically decomposing generation quality into 16 distinct evaluation dimensions. It further computes three weighted aggregate scores derived from these dimensions to holistically assess model performance. It consists of 946 prompts covering all evaluation dimensions. Video generation is a computation-heavy task: generating a four-second 480p video takes about 3 minutes. We therefore use one random seed instead of five in all the following experiments. In the ablation study, to speed up the experiments, we use 48 Sora prompts [23]; we use all 946 VBench prompts when comparing with other methods in Section 4.3.

Baselines We demonstrate the effectiveness and efficiency of RainFusion by comparing it with other sparse or cache-based methods, namely DiTFastAttn [40] and ∆-DiT [6]. For DiTFastAttn, we use the official configurations. For ∆-DiT, we use an acceleration rate similar to RainFusion's. For RainFusion, we set the sparsity to 50% and keep the first 10% of timesteps using dense calculation, which corresponds to about a 1.85× speedup in attention. Specifically, we set the bandwidth to 1/4 in both the local and global patterns, corresponding to a 9/16 computation reduction. For the textural pattern, we reduce the key and value tokens by half using the checkerboard layout described in Section 3.2.

Table 1. Comparison with state-of-the-art algorithms.

Model              Method                Loss    Quality Score ↑   Semantic Score ↑   Total Score ↑   Speedup
CogVideoX-5B       baseline              /       82.04             70.70              79.77           1.0×
                   ∆-DiT                 -5.37   76.56             65.76              74.40           1.81×
                   DiTFastAttn           -5.36   77.94             60.27              74.41           1.52×
                   RainFusion            -0.28   81.64             70.91              79.49           1.85×
OpenSoraPlan-1.2   baseline              /       79.60             38.01              71.28           1.0×
                   ∆-DiT                 -0.93   78.51             37.70              70.35           1.81×
                   DiTFastAttn           -2.31   77.56             34.59              68.97           1.42×
                   RainFusion            -0.32   79.08             38.44              70.95           1.91×
HunyuanVideo       baseline              /       84.11             70.12              81.31           1.0×
                   ∆-DiT                 -0.87   83.27             69.08              80.44           1.81×
                   DiTFastAttn           -1.14   82.87             69.37              80.17           1.78×
                   RainFusion            -0.40   83.77             69.46              80.91           1.89×
                   RainFusion+           -0.19   83.79             70.46              81.12           1.84×
                   RainFusion+ & ∆-DiT   -0.49   83.43             70.35              80.82           2.37×
Table 2. Ablation results. RainFusion with all three patterns and local estimation achieves the best result. We denote S, T, Te, L as the spatial head, temporal head, textural head and the use of local sampling for estimating the local pattern recall, respectively.

Model              S  T  Te  L    Average Loss   Subject Cons. ↑   Background Cons. ↑   Motion Smooth. ↑   Dynamic Degree ↑   Aesthetic Qual. ↑   Imaging Qual. ↑
CogVideoX-5B       (baseline)      /              93.48             95.24                97.19              45.83              58.12               64.79
                   ✓  ✓      ✓    -1.05          92.87             94.65                97.47              45.83              56.58               60.91
                   ✓  ✓  ✓   ✓    -0.18          93.27             95.31                97.23              45.83              58.11               63.80
                   ✓  ✓  ✓        -0.42          93.08             95.40                97.27              45.83              57.17               63.36
OpenSoraPlan-1.2   (baseline)      /              94.65             95.19                99.40              41.67              56.84               57.67
                   ✓  ✓      ✓    -1.29          92.53             94.72                98.94              43.75              55.05               52.66
                   ✓  ✓  ✓   ✓    +1.03          93.22             94.31                99.15              52.08              55.67               57.17
                   ✓  ✓  ✓        +0.33          92.73             94.69                99.18              50.00              55.12               55.71

4.2. Ablation Study

Component-wise Analysis For RainFusion, there exist three kinds of heads (Spatial, Temporal, Textural), as shown in Fig. 2 (c). We use the ARM in Fig. 2 (b) to determine the pattern for each head. We run ablations on the effectiveness of each head and on how to estimate the patterns. We observe that a local sparse pattern can exist inside the global sparse pattern, as shown in the second row of Fig. 3. So if a head exceeds the recall threshold for both the local and global patterns, we use the local pattern first, to cover more of the active region. We use global sampling as described in Sec. 3.3 to obtain the global pattern recall. To estimate the local pattern recall, we compare the global sampling method and the local sampling method (considering only the first-frame tokens). As shown in Tab. 2, using all three masks together with local estimation yields the best results.

Specifically, for CogVideoX-5B, using only two masks (as in SVG [38], which only uses spatial and temporal heads), the loss is 1.05%. When adding the textural head, we obtain the best VBench score with an average loss of 0.18%. If we change the local estimate to a global estimate, the loss is 0.42%. And as shown in Fig. 6, RainFusion using three masks outperforms the two-mask variant similar to SVG [38]; videos generated with our method preserve more details with better imaging quality.

For OpenSoraPlan-1.2, the results are similar. Using only the two-mask method drops performance by 1.29%; when adding the textural head, the result even outperforms the baseline. With global sampling, however, the result is slightly worse: a 0.33% improvement compared to 1.03% with local sampling in VBench score. We therefore use all three masks and local sampling as the default RainFusion configuration in the following sections.

Figure 5: Video comparison using CogVideoX-5B with different accelerating algorithms. Left prompt: "A steam train moving on a mountainside." Right prompt: "a zebra on the left of a giraffe, front view.".

Figure 6: RainFusion video comparisons on CogVideoX-5B. "Two Head" means only using the spatial and temporal heads, similar to SVG. RainFusion performs better than SVG and similarly to the baseline at a 1.85× speedup. Left prompt: "Animated scene features a close-up of a short fluffy monster kneeling beside a melting red candle". Right prompt: "A petri dish with a bamboo forest growing within it that has tiny red pandas running around.".
Parameter Sensitivity. We test different sparsity ratios by varying the bandwidth of the spatial-temporal heads and the stride of the textural head. For CogVideoX-5B, we test RainFusion configurations at 2.5× and 3.0× speedup by setting the bandwidth of the spatial and temporal heads to 0.18 and 0.13, and the textural stride to 3 and 4, respectively. As shown in Tab. 3, our default setting achieves a 1.85× speedup with a 0.21% average loss, while the 2.5× and 3.0× settings lose 0.56% and 1.36%, respectively. The 2.5× RainFusion performs better than the two-mask (local and global only) variant of Tab. 2 for CogVideoX-5B, which is similar to SVG at 1.85× speedup. Our method can thus trade a small amount of accuracy for additional speedup, and it shows robust performance across different speedup ratios, as illustrated in Fig. 7.

Method           | Avg. Loss | Subj. Cons.↑ | Backg. Cons.↑ | Motion Smooth.↑ | Aesthetic↑ | Imaging↑
baseline         | /     | 93.48 | 95.24 | 97.19 | 58.12 | 64.79
1.85× RainFusion | -0.21 | 93.27 | 95.31 | 97.23 | 58.11 | 63.81
2.50× RainFusion | -0.56 | 92.80 | 95.08 | 97.14 | 57.56 | 63.40
3.00× RainFusion | -1.36 | 92.36 | 94.85 | 97.27 | 55.20 | 62.30

Table 3. Parameter sensitivity experiment results.

Figure 7. Video comparison on CogVideoX-5B with different speedup ratios. Left prompt: "A cat waking up its sleeping owner demanding breakfast." Right prompt: "An extreme close-up of a gray-haired man with a beard in his 60s."

4.3. Comparison with Baselines

Quantitative Results. We compare RainFusion with the sparse method DiTFastAttn and the cache-based method ∆-DiT; the results are shown in Table 1. At similar speedups, RainFusion performs best among the acceleration methods. Specifically, for CogVideoX-5B, RainFusion drops only 0.28% in VBench total score, while DiTFastAttn and ∆-DiT drop by 5.36% and 5.37%, respectively. For OpenSoraPlan-1.2 and HunyuanVideo the results are similar: RainFusion performs best, with total-score losses of only -0.32% and -0.4%, respectively.

Qualitative Analysis and Integrability. As shown in Fig. 5, RainFusion achieves the best visual quality among all methods, while DiTFastAttn and ∆-DiT suffer from noisy patches or inconsistent subjects. Notably, RainFusion is orthogonal to other acceleration approaches, such as cache-based methods, and can be combined with them for a multiplicative speedup, as shown in Fig. 1. For example, integrating RainFusion at 1.84× with ∆-DiT at 1.3× yields a total speedup of 2.4× on HunyuanVideo. As detailed in Tab. 1, the RainFusion+ variant employs dynamic bandwidth selection (0.5, 0.25, 0.125) across attention heads; we determine the optimal bandwidth for each head by selecting the minimum value that maintains 90% recall. Combining RainFusion+ with ∆-DiT results in a minor -0.49% performance drop, demonstrating its practical feasibility.

5. Conclusion

We introduce RainFusion, which exploits spatial, temporal and textural sparsity in video generation models. Experiments demonstrate that RainFusion achieves significant speedups on several video generation models with negligible quality loss (~0.2% loss in VBench score). Our method is training-free and calibration-free, making it a plug-and-play tool for accelerating video generation models. In future work, we will push the sparsity ratio further while preserving video quality, and explore improving video quality with fine-tuning.

References
[1] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, Varun Jampani, and Robin Rombach. Stable video diffusion: Scaling latent video diffusion models to large datasets, 2023. 1
[2] Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. 3
[3] Daniel Bolya and Judy Hoffman. Token merging for fast stable diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4599–4603, 2023. 3
[4] Junsong Chen, Chongjian Ge, Enze Xie, Yue Wu, Lewei Yao, Xiaozhe Ren, Zhongdao Wang, Ping Luo, Huchuan Lu, and Zhenguo Li. PixArt-σ: Weak-to-strong training of diffusion transformer for 4K text-to-image generation. In European Conference on Computer Vision, pages 74–91. Springer, 2024. 3
[5] Pengtao Chen, Mingzhu Shen, Peng Ye, Jianjian Cao, Chongjun Tu, Christos-Savvas Bouganis, Yiren Zhao, and Tao Chen. ∆-DiT: A training-free acceleration method tailored for diffusion transformers. arXiv preprint arXiv:2406.01125, 2024. 3
[6] Pengtao Chen, Mingzhu Shen, Peng Ye, Jianjian Cao, Chongjun Tu, Christos-Savvas Bouganis, Yiren Zhao, and Tao Chen. ∆-DiT: A training-free acceleration method tailored for diffusion transformers, 2024. 2, 5
[7] Xuanyao Chen, Zhijian Liu, Haotian Tang, Li Yi, Hang Zhao, and Song Han. SparseViT: Revisiting activation sparsity for efficient high-resolution vision transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2061–2070, 2023. 3
[8] Prafulla Dhariwal, Jonathan Ho, Ajay Jain, and Pieter Abbeel. Guided diffusion models. In NeurIPS, 2022. 3
[9] Roberto Henschel, Levon Khachatryan, Daniil Hayrapetyan, Hayk Poghosyan, Vahram Tadevosyan, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. StreamingT2V: Consistent, dynamic, and extendable long video generation from text, 2024. 1
[10] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020. 3
[11] Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, Yaohui Wang, Xinyuan Chen, Limin Wang, Dahua Lin, Yu Qiao, and Ziwei Liu. VBench: Comprehensive benchmark suite for video generative models, 2023. 5
[12] Huiqiang Jiang, Yucheng Li, Chengruidong Zhang, Qianhui Wu, Xufang Luo, Surin Ahn, Zhenhua Han, Amir H. Abdi, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. MInference 1.0: Accelerating pre-filling for long-context LLMs via dynamic sparse attention, 2024. 3
[13] Kumara Kahatapitiya, Haozhe Liu, Sen He, Ding Liu, Menglin Jia, Chenyang Zhang, Michael S. Ryoo, and Tian Xie. Adaptive caching for faster video generation with diffusion transformers, 2024. 2, 3
[14] Weijie Kong, Qi Tian, Zijian Zhang, Rox Min, Zuozhuo Dai, Jin Zhou, Jiangfeng Xiong, Xin Li, Bo Wu, Jianwei Zhang, Kathrina Wu, Qin Lin, Junkun Yuan, Yanxin Long, Aladdin Wang, Andong Wang, Changlin Li, Duojun Huang, Fang Yang, Hao Tan, Hongmei Wang, Jacob Song, Jiawang Bai, Jianbing Wu, Jinbao Xue, Joey Wang, Kai Wang, Mengyang Liu, Pengyu Li, Shuai Li, Weiyan Wang,
Wenqing Yu, Xinchi Deng, Yang Li, Yi Chen, Yutao Cui, Yuanbo Peng, Zhentao Yu, Zhiyu He, Zhiyong Xu, Zixiang Zhou, Zunnan Xu, Yangyu Tao, Qinglin Lu, Songtao Liu, Dax Zhou, Hongfa Wang, Yong Yang, Di Wang, Yuhong Liu, Jie Jiang, and Caesar Zhong. HunyuanVideo: A systematic framework for large video generative models, 2025. 3, 5
[15] Lijiang Li, Huixia Li, Xiawu Zheng, Jie Wu, Xuefeng Xiao, Rui Wang, Min Zheng, Xin Pan, Fei Chao, and Rongrong Ji. AutoDiffusion: Training-free optimization of time steps and architectures for automated diffusion model acceleration, 2023. 2
[16] Feng Liu, Shiwei Zhang, Xiaofeng Wang, Yujie Wei, Haonan Qiu, Yuzhong Zhao, Yingya Zhang, Qixiang Ye, and Fang Wan. Timestep embedding tells: It's time to cache for video diffusion model, 2024. 2, 3
[17] Haozhe Liu, Wentian Zhang, Jinheng Xie, Francesco Faccio, Mengmeng Xu, Tao Xiang, Mike Zheng Shou, Juan-Manuel Perez-Rua, and Jürgen Schmidhuber. Faster diffusion via temporal attention decomposition. arXiv e-prints, pages arXiv–2404, 2024. 3
[18] Nanye Ma, Shangyuan Tong, Haolin Jia, Hexiang Hu, Yu-Chuan Su, Mingda Zhang, Xuan Yang, Yandong Li, Tommi Jaakkola, Xuhui Jia, et al. Inference-time scaling for diffusion models beyond scaling denoising steps. arXiv preprint arXiv:2501.09732, 2025. 3
[19] Xinyin Ma, Gongfan Fang, Michael Bi Mi, and Xinchao Wang. Learning-to-cache: Accelerating diffusion transformer via layer caching. arXiv preprint arXiv:2406.01733, 2024. 3
[20] Xinyin Ma, Gongfan Fang, and Xinchao Wang. DeepCache: Accelerating diffusion models for free. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 3
[21] Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, and Yu Qiao. Latte: Latent diffusion transformer for video generation, 2024. 1
[22] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162–8171. PMLR, 2021. 3
[23] Open-Sora. Sora prompt, 2024. 5
[24] OpenAI. Sora, 2024. 1, 3
[25] William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205, 2023. 3
[26] Open-Sora Plan. Open-Sora Plan, 2024. 1, 3, 5
[27] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 3
[28] Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, David Yan, Dhruv Choudhary, Dingkang Wang, Geet Sethi, Guan Pang, Haoyu Ma, Ishan Misra, Ji Hou, Jialiang Wang, Kiran Jagadeesh, Kunpeng Li, Luxin Zhang, Mannat Singh, Mary Williamson, Matt Le, Matthew Yu, Mitesh Kumar Singh, Peizhao Zhang, Peter Vajda, Quentin Duval, Rohit Girdhar, Roshan Sumbaly, Sai Saketh Rambhatla, Sam Tsai, Samaneh Azadi, Samyak Datta, Sanyuan Chen, Sean Bell, Sharadh Ramaswamy, Shelly Sheynin, Siddharth Bhattacharya, Simran Motwani, Tao Xu, Tianhe Li, Tingbo Hou, Wei-Ning Hsu, Xi Yin, Xiaoliang Dai, Yaniv Taigman, Yaqiao Luo, Yen-Cheng Liu, Yi-Chiao Wu, Yue Zhao, Yuval Kirstain, Zecheng He, Zijian He,
Albert Pumarola, Ali Thabet, Artsiom Sanakoyeu, Arun Mallya, Baishan Guo, Boris Araya, Breena Kerr, Carleigh Wood, Ce Liu, Cen Peng, Dimitry Vengertsev, Edgar Schonfeld, Elliot Blanchard, Felix Juefei-Xu, Fraylie Nord, Jeff Liang, John Hoffman, Jonas Kohler, Kaolin Fire, Karthik Sivakumar, Lawrence Chen, Licheng Yu, Luya Gao, Markos Georgopoulos, Rashel Moritz, Sara K. Sampson, Shikai Li, Simone Parmeggiani, Steve Fine, Tara Fowler, Vladan Petrovic, and Yuming Du. Movie Gen: A cast of media foundation models, 2024. 1
[29] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022. 3
[30] Amirmojtaba Sabour, Sanja Fidler, and Karsten Kreis. Align your steps: Optimizing sampling schedules in diffusion models, 2024. 2
[31] Ethan Smith, Nayan Saxena, and Aninda Saha. ToDo: Token downsampling for efficient generation of high-resolution images. arXiv preprint arXiv:2402.13573, 2024. 3
[32] Hanlin Tang, Yang Lin, Jing Lin, Qingsen Han, Shikuan Hong, Yiwu Yao, and Gongyi Wang. RazorAttention: Efficient KV cache compression through retrieval heads. arXiv preprint arXiv:2407.15891, 2024. 3
[33] Yuchuan Tian, Zhijun Tu, Hanting Chen, Jie Hu, Chao Xu, and Yunhe Wang. U-DiTs: Downsample tokens in U-shaped diffusion transformers. arXiv preprint arXiv:2405.02730, 2024. 3
[34] Chau Tran, Duy M. H. Nguyen, Manh-Duy Nguyen, TrungTin Nguyen, Ngan Le, Pengtao Xie, Daniel Sonntag, James Y. Zou, Binh Nguyen, and Mathias Niepert. Accelerating transformers with spectrum-preserving token merging. Advances in Neural Information Processing Systems, 37:30772–30810, 2025. 3
[35] Hongjie Wang, Difan Liu, Yan Kang, Yijun Li, Zhe Lin, Niraj K. Jha, and Yuchen Liu. Attention-driven training-free efficiency enhancement of diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16080–16089, 2024. 3
[36] Felix Wimbauer, Bichen Wu, Edgar Schoenfeld, Xiaoliang Dai, Ji Hou, Zijian He, Artsiom Sanakoyeu, Peizhao Zhang, Sam Tsai, Jonas Kohler, et al. Cache me if you can: Accelerating diffusion models through block caching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6211–6220, 2024. 3
[37] Haoyu Wu, Jingyi Xu, Hieu Le, and Dimitris Samaras. Importance-based token merging for diffusion models. arXiv preprint arXiv:2411.16720, 2024. 3
[38] Haocheng Xi, Shuo Yang, Yilong Zhao, Chenfeng Xu, Muyang Li, Xiuyu Li, Yujun Lin, Han Cai, Jintao Zhang, Dacheng Li, et al. Sparse VideoGen: Accelerating video diffusion transformers with spatial-temporal sparsity. arXiv preprint arXiv:2502.01776, 2025. 2, 4, 6
[39] Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, Da Yin, Xiaotao Gu, Yuxuan Zhang, Weihan Wang, Yean Cheng, Ting Liu, Bin Xu, Yuxiao Dong, and Jie Tang. CogVideoX: Text-to-video diffusion models with an expert transformer, 2024. 3, 5
[40] Zhihang Yuan, Hanling Zhang, Pu Lu, Xuefei Ning, Linfeng Zhang, Tianchen Zhao, Shengen Yan, Guohao Dai, and Yu Wang. DiTFastAttn: Attention compression for diffusion transformer models, 2024. 2, 3, 5
[41] Stephen Zhang and Vardan Papyan. OATS: Outlier-aware pruning through sparse and low rank
Fixed-Point Traps and Identity Emergence in Educational Feedback Systems

Faruk Alpay∗

May 28, 2025

Abstract

I present a categorical framework for analyzing fixed-point emergence in educational feedback systems, where exam-grade collapse mechanisms prevent the formation of stable learner identities. Using endofunctors and initial algebras from category theory, I model learning processes as generative functors φ that would naturally converge to fixed-point identities under transfinite iteration. However, when educational assessment introduces entropy-reducing collapse functors E, I prove that no nontrivial initial algebra can exist for the composite functor F = E ∘ φ. This mathematical obstruction categorically blocks creativity-driven identity emergence, creating what I term "fixed-point traps" in educational systems. My results demonstrate that exam-driven feedback loops fundamentally prevent the stabilization of learner identities, offering a formal explanation for creativity suppression in assessment-heavy educational environments.

Keywords: Explainable AI, Category Theory, Fixed-Point Algebra, Exam System Collapse, Education Trap, Grading Paradox, Categorical Identity, Symbolic Mathematics, Creativity Suppression, Alpay Algebra, φ-Algebra, Transfinite Fixed Point, Collapse Morphism, University Failure, Anti-Creativity Structures, Observer Collapse, Research Inhibition, Exam-Driven Learning, Categorical Obstruction, Emergence Blocking

MSC 2020: 18A15 (Foundations and basic properties of categories), 18C15 (Monads and comonads), 91D30 (Social choice theory), 97C70 (Teaching-learning processes), 03B70 (Logic in computer science), 68T01 (Foundations of artificial intelligence)

ACM Classification: F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic—Categorical algebra; I.2.0 [Artificial Intelligence]: General—Cognitive modeling; K.3.2 [Computers and Education]: Computer and Information Science Education—Symbolic systems, Feedback architectures

∗Independent Researcher, ORCID: 0009-0009-2207-6528

1 Preliminaries and Notation

Definition 1.1 (Category and Identity Morphisms). A category C consists of a class of objects Ob(C), a class of morphisms (arrows) Hom(C), source and target maps assigning to each morphism f its domain and codomain, a composition law, and for each object X an identity morphism id_X satisfying id_X ∘ f = f and g ∘ id_X = g whenever composable. These data satisfy associativity of composition (see [2]).

Definition 1.2 (Endofunctor and φ-Algebra). Let C be a category. An endofunctor φ: C → C assigns to each object X an object φ(X) and to each morphism f: X → Y a morphism φ(f): φ(X) → φ(Y), preserving identities and composition. A φ-algebra is a pair (X, α) where X ∈ Ob(C) and α: φ(X) → X is a morphism in C. A morphism of φ-algebras (X, α) → (Y, β) is a map h: X → Y with h ∘ α = β ∘ φ(h). An initial φ-algebra is a φ-algebra (µφ, ι) such that for every (X, α) there is a unique φ-algebra homomorphism (µφ, ι) → (X, α).

Definition 1.3 (Fixed-Point Object as Identity). The transfinite φ-chain starting from an initial object 0 (assumed in C) is defined by X_0 = 0, X_{n+1} = φ(X_n), and for limit ordinals λ, X_λ = colim_{γ<λ} X_γ. If this chain converges at stage Λ, so that X_Λ ≅ φ(X_Λ), the object X_Λ is called the fixed-point object or initial fixed point of φ. We denote this object by µφ. By definition µφ ≅ φ(µφ), so µφ is a (least) fixed point of φ and is regarded as the categorical identity of the generative process φ.
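As a purely illustrative, finite stand-in for this transfinite chain (the choice of map below is mine, not the paper's), a Kleene-style iteration of a monotone set-level map from the empty set reaches its least fixed point after finitely many steps:

```python
def lfp(phi, bottom=frozenset(), max_steps=1000):
    # Kleene iteration X_0 = bottom, X_{n+1} = phi(X_n); stop when X = phi(X).
    x = bottom
    for _ in range(max_steps):
        nxt = phi(x)
        if nxt == x:
            return x
        x = nxt
    raise RuntimeError("no fixed point within max_steps")

# phi(X) = {0} ∪ {x + 1 : x ∈ X, x < 5} converges to {0, 1, 2, 3, 4, 5}.
phi = lambda X: frozenset({0} | {x + 1 for x in X if x < 5})
print(lfp(phi))  # frozenset({0, 1, 2, 3, 4, 5})
```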
Lemma 1.1 (Lambek's Lemma). If (µφ, ι) is an initial φ-algebra, then the structure map ι: φ(µφ) → µφ is an isomorphism. In particular, µφ is a fixed point of φ (up to isomorphism). Equivalently, any initial φ-algebra is in fact a fixed-point algebra [2].

2 Exam-Grade Collapse Systems

Definition 2.1 (Fold/Collapse Morphism). Let C be as above, endowed with
a complexity measure h: Ob(C) → Ord (an "entropy"). A morphism f: X → Y in C is called a fold or entropy-reducing collapse if f is an epimorphism (surjective on structure) that is not invertible and h(Y) < h(X) (so f identifies distinct substructures of X in Y). For example, in the observer-coupled collapse of [4], the perturbed identity contains two copies of the core identity and is thus collapsed back to the original; such a canonicalization is a fold.

Definition 2.2 (Exam-Grade Collapse System). An Exam-Grade Collapse System (EGCS) consists of a category C (with an initial object) together with:

• a generative endofunctor φ: C → C (modeling creative learning/update), and
• a collapse endofunctor E: C → C (modeling the exam-plus-grading process),

plus a natural transformation ε: φ ⇒ E ∘ φ whose component ε_X: φ(X) → E(φ(X)) is a fold for every object X. Thus each examination step applies φ and then collapses via E, strictly reducing entropy. We require h(E(φ(X))) < h(φ(X)) for all X, so the exam functor E always maps states into strictly "lower-entropy" subspaces.

Proposition 2.1 (Entropy Reduction). In an EGCS, for every object X the composite morphism ε_X: φ(X) → E(φ(X)) is a fold, and hence h(E(φ(X))) < h(φ(X)). In particular, each exam collapses the state space.

Proof. By Definition 2.2, ε_X is an epimorphism that is not invertible, so by definition a fold. By assumption, h(E(φ(X))) < h(φ(X)). □

3 Fixed-Point Trap in EGCS

Theorem 3.1 (Nonexistence of Nontrivial Fixed Points). In an EGCS, the composite functor F = E ∘ φ admits no nontrivial initial algebra. Equivalently, there is no object X ≇ 0 satisfying X ≅ F(X) (a fixed point) except the trivial initial object.

Proof. Suppose for contradiction that (µ, ι) is an initial F-algebra, i.e. ι: F(µ) = E(φ(µ)) → µ is initial. By Lambek's lemma, ι is an isomorphism and µ ≅ F(µ) = E(φ(µ)). But since ε_µ: φ(µ) → E(φ(µ)) is a fold, E(φ(µ)) has strictly lower entropy than φ(µ). Thus φ(µ) ≇ E(φ(µ)) unless µ is degenerate (initial). In particular, E(φ(µ)) cannot be isomorphic to µ unless µ ≅ 0. Hence the assumed isomorphism µ ≅ E(φ(µ)) fails. This contradiction shows that no nontrivial initial F-algebra can exist. □

Corollary 3.1 (Identity Emergence Blocked). Because no initial F-algebra exists in the EGCS, there is no emergent "identity object" µF that is fixed by F. In particular, the process φ cannot produce its own identity via transfinite iteration once the exam collapse is enforced. Equivalently, there is no universal fixed-point object in C under F, so the category of F-algebras has no terminal/initial object to play the role of a stabilized identity.

Proof. Immediate from Theorem 3.1 and the definition of identity-as-fixed-point. Indeed, in [3] the identity of φ is given by the initial φ-algebra (the colimit of the chain), whose existence is now precluded. Without µF, there is no canonical identity morphism in the emergent algebra of states; in fact, it was shown in [3] that the unique homomorphism !: µφ → X (for any φ-algebra (X, α)) realizes the "generative identity" on X. Since µF does not exist, that identity map cannot be defined. □

4 φ-Emergence and Creativity Blockage

Definition 4.1 (φ-Emergence of Identity). We say the endofunctor φ exhibits φ-emergence of identity if the transfinite iteration of φ converges to an initial algebra µφ which serves as its identity (as in [3]). In the absence of collapse, µφ is the unique object with φ(µφ) ≅ µφ.
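Before the blocking theorem, the trap of Theorem 3.1 can be mimicked computationally: composing the generative map from the earlier sketch with an entropy-reducing collapse leaves the iteration stuck in a low-entropy state, so the identity that φ alone would reach never emerges. This is only a Set-flavored analogy of my own devising, not the categorical argument itself:

```python
# Generative map phi as in the earlier sketch; E strictly reduces "entropy"
# (here: set size) whenever it can, mimicking an exam-grade fold.
phi = lambda X: frozenset({0} | {x + 1 for x in X if x < 5})
E = lambda X: frozenset(sorted(X)[:-1]) if len(X) > 1 else X
F = lambda X: E(phi(X))  # one exam-coupled step, F = E ∘ phi

x = frozenset()
for _ in range(20):
    x = F(x)
print(x)  # frozenset({0}): trapped in a low-entropy state, never reaching
          # the identity frozenset({0, ..., 5}) that iterating phi alone yields
```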
Theorem 4.1 (Categorical Blocking of Creativity). In an EGCS, any φ-emergence of identity is categorically blocked. That is, even for a "creativity-driven" functor φ, the required fixed-point object µφ cannot form, because the exam-induced collapse intervenes. Hence creativity-driven emergence of φ's identity is impossible.

Proof. By [3], the essence of identity emergence is that under transfinite iteration of φ, a unique fixed point µφ appears as the initial φ-algebra. But in the EGCS the actual process is controlled by F = E ∘ φ. Theorem 3.1 showed that F has no nontrivial initial algebra, so φ cannot reach its would-be fixed point. Concretely, each step φ(X) is immediately collapsed by E into a lower-entropy state. By analogy to [4], an "observer-coupled collapse" repeatedly injects redundant structure (copying the identity into itself); here exams play the role of the observer, permanently perturbing and collapsing the learner's state. Thus any candidate identity φ-algebra never stabilizes. In summary, the universal invariant fixed point of φ (the creative identity) is prevented from emerging by the entropy-reducing folds. □

Remark. My construction and proofs use only standard categorical notions (categories, functors, initial algebras) in the Bourbaki–Mac Lane tradition [1, 2]. The key observation is that exam-plus-grading constitutes a functorial collapse that violates the usual convergence conditions for initial algebras. This furnishes a purely mathematical fixed-point trap: the system has universal colimits and initial objects, yet every generative chain is pulled into a low-entropy state that lacks a new identity. Consequently, creativity-driven φ-emergence is categorically obstructed.

5 Conclusion

We have established a categorical framework demonstrating that exam-grade collapse systems fundamentally obstruct the formation of stable learner identities. The mathematical core lies in proving that the composite functor F = E ∘ φ cannot admit nontrivial initial algebras when the exam functor E enforces entropy-reducing collapses.

This result provides a formal foundation for understanding creativity suppression in assessment-heavy educational environments. The fixed-point-trap mechanism shows that repeated examination prevents the natural convergence of learning dynamics to stable identity states, suggesting that alternative assessment approaches may be necessary to support creative development.

Future work will extend this framework to analyze specific educational interventions and explore categorical conditions under which identity emergence can be preserved despite evaluative pressures.

References

[1] N. Bourbaki. Éléments de mathématique. Théorie des ensembles. Hermann, Paris, 1970.
[2] S. Mac Lane. Categories for the Working Mathematician. Graduate Texts in Mathematics. Springer-Verlag, New York, 2nd edition, 1998.
[3] F. Alpay. Alpay algebra II: Identity as fixed-point emergence in categorical data. arXiv preprint arXiv:2505.17480 [math.CT], 2025. https://arxiv.org/abs/2505.17480.
[4] F. Alpay. Alpay algebra III: Observer-coupled collapse and the temporal drift of identity. arXiv preprint arXiv:2505.19790 [math.CT], 2025. https://arxiv.org/abs/2505.19790.
FCKT: Fine-Grained Cross-Task Knowledge Transfer with Semantic Contrastive Learning for Targeted Sentiment Analysis

Wei Chen1, Zhao Zhang2, Meng Yuan1, Kepeng Xu3 and Fuzhen Zhuang1,4†
1School of Artificial Intelligence, Beihang University, China
2School of Computer Science and Engineering, Beihang University, China
3Xidian University, China  4Zhongguancun Laboratory, China
{chenwei23, zhuangfuzhen}@buaa.edu.cn
†Corresponding author.

Abstract

In this paper, we address the task of targeted sentiment analysis (TSA), which involves two sub-tasks, i.e., identifying specific aspects from reviews and determining their corresponding sentiments. Aspect extraction forms the foundation for sentiment prediction, highlighting the critical dependency between these two tasks for effective cross-task knowledge transfer. While most existing studies adopt a multi-task learning paradigm to align task-specific features in the latent space, they predominantly rely on coarse-grained knowledge transfer. Such approaches lack fine-grained control over aspect-sentiment relationships, often assuming uniform sentiment polarity within related aspects. This oversimplification neglects contextual cues that differentiate sentiments, leading to negative transfer. To overcome these limitations, we propose FCKT, a fine-grained cross-task knowledge transfer framework tailored for TSA. By explicitly incorporating aspect-level information into sentiment prediction, FCKT achieves fine-grained knowledge transfer, effectively mitigating negative transfer and enhancing task performance. Experiments on three datasets, including comparisons with various baselines and large language models (LLMs), demonstrate the effectiveness of FCKT. The source code is available at https://github.com/cwei01/FCKT.

1 Introduction

Unlike traditional sentiment analysis tasks [Liu, 2012], aspect-based sentiment analysis (ABSA) [Wang et al., 2016; Tang et al., 2020] requires a deeper contextual understanding and the extraction of more fine-grained information. ABSA has been widely applied in tasks such as customer feedback analysis and academic paper review evaluation [Nath and Dwivedi, 2024]. ABSA typically involves identifying (1) aspects (entities or attributes discussed), (2) opinions (sentiment expressions), and (3) sentiment polarity. However, research indicates that over 30% of sentiment expressions are implicit [Cai et al., 2021; Chen et al., 2024a], meaning that ABSA must infer aspects and sentiments from the context without predefined opinions.

Figure 1: Illustration comparing coarse-grained and fine-grained transfer methods. Coarse-grained transfer assumes uniform sentiment polarity for related aspects, while fine-grained transfer captures nuanced aspect-sentiment relationships for precise alignment. (Example review: "The lens and autofocus are impressive, except low-light capability." Coarse-grained transfer pulls all camera-related aspects together and predicts (pos, pos, pos), whereas fine-grained transfer correctly outputs (pos, pos, neg).)

In this paper, we address these challenges by developing targeted sentiment analysis (TSA), which involves two sub-tasks, i.e., identifying relevant aspects from reviews and explicitly associating them with their corresponding sentiments [Chen et al., 2022b; Zhou et al., 2024]. When TSA was first proposed, a two-stage pipeline method was developed [Hu et al., 2019; Kalbhor and Goyal, 2023].
However, this approach often leads to error propagation between sub-tasks. Moreover, aspects are intrinsically linked with sentiment, and a staged pipeline disrupts this crucial interaction [Xu et al., 2021]. To address these issues, many end-to-end approaches have been proposed [Chen et al., 2022a; Li et al., 2023; Zhu et al., 2024], aiming to jointly extract aspect
terms and classify their sentiments by modeling the interactions between aspects and sentiments. A widely studied technique in this field is the task-specific feature alignment approach [Chen et al., 2024a]. This method involves two key steps: first, encoding task-specific features for both aspects and sentiments; second, aligning these features in the latent space, even when some aspects differ in sentiment, to enhance cross-task knowledge interaction.

Despite recent advancements, these methods still rely on coarse-grained knowledge transfer, which fails to provide explicit and fine-grained control over aspect-sentiment relationships. This limitation often leads to negative transfer, as coarse-grained approaches assume uniform sentiment polarity across related aspects, ignoring key contextual differences. For instance, as illustrated in Figure 1, coarse-grained methods treat all camera-related aspects ("lens", "autofocus", and "low-light capability") as sharing the same sentiment representation. Consequently, these methods incorrectly classify "low-light capability" as positive, overlooking the negative sentiment implied by the contrastive cue "except". This misalignment, stemming from the lack of explicit control over knowledge transfer, is particularly problematic when aspect-sentiment relationships are subtle or context-dependent.

Along this research line, explicit fine-grained knowledge transfer has emerged as a promising solution to enhance the interaction between aspects and sentiments. It disentangles aspect-sentiment relationships and enables dynamic alignment tailored to each aspect. By isolating sentiment representations for each aspect, it accurately associates "lens" and "autofocus" with positive sentiment and "low-light capability" with negative sentiment, leveraging contextual cues like the contrast implied by "except". This ensures more precise and context-aware sentiment classification. However, despite its potential, explicit fine-grained cross-task knowledge transfer faces a more severe challenge of error accumulation that must be addressed. (1) In fine-grained knowledge transfer, the performance of tasks such as aspect extraction and sentiment classification is highly interdependent, as information flows directly between tasks. While leveraging aspect extraction to assist sentiment classification fosters cross-task collaboration, it also introduces the risk of error propagation: errors or ambiguities in aspect extraction, such as incorrectly identified or missing aspects, are directly transferred to the sentiment classification phase, compounding inaccuracies in the final predictions. To address this, a more effective approach is required to constrain aspect extraction, ensuring more accurate identification and minimizing the influence of ambiguity. (2) Moreover, even with perfect aspect extraction, fine-grained knowledge transfer may still struggle due to insufficient supervisory signals for sentiment classification. Because the sentiment classifier relies heavily on aspect extraction, its training can become overly dependent on these features, potentially overlooking other crucial contextual cues that are not explicitly tied to aspect information. This lack of comprehensive supervision can hinder the classifier's ability to generalize effectively, and it underscores the need for balanced, holistic supervision to optimize fine-grained knowledge transfer.
To address these challenges, we propose FCKT, a fine-grained cross-task knowledge transfer framework tailored to the TSA task. Specifically, we design a token-level semantic contrastive learning mechanism in which the start and end tokens of the same aspect are treated
as positive pairs, while tokens from unrelated aspects serve as negative pairs. This refines the model's understanding of aspects and enhances its ability to capture subtle contextual dependencies. To mitigate the lack of supervisory signals in the sentiment classifier, we introduce an alternating learning strategy: a fixed proportion of samples is trained using real labels, while the remaining samples are updated based on predictions from the previous model. This enables FCKT to leverage both real labeled information and predicted transfer knowledge, synergistically improving the effectiveness of TSA. Furthermore, while large language models (LLMs) like GPT-3.5 and GPT-4 excel in many general-purpose scenarios, they often struggle to adapt to the task-specific challenges of fine-grained TSA. Our proposed framework surpasses these models in both few-shot learning and chain-of-thought reasoning scenarios by leveraging task-specific knowledge transfer mechanisms and tailored training strategies. These results highlight the capability of FCKT to address the unique complexities of TSA tasks, providing valuable insights for advancing task-specific methods in the LLM era. Our key contributions are summarized as follows:

• We propose a token-level semantic contrastive learning mechanism that enhances the model's understanding of aspects by treating start and end tokens as positive pairs and unrelated tokens as negative pairs, enabling it to effectively capture subtle contextual dependencies.

• To address insufficient supervisory signals in sentiment classification, we introduce an alternating learning strategy that integrates real labeled data with predicted transfer knowledge, significantly improving task-specific performance.

• We conduct extensive experiments on three real-world datasets, demonstrating the superiority of FCKT over existing approaches, and we further demonstrate its effectiveness against various LLMs, showcasing its strong performance in the era of LLMs.

2 Related Work

Targeted Sentiment Analysis. Targeted sentiment analysis (TSA) involves extracting aspect terms and their associated sentiment polarities, a key focus in recent research. Early work approached aspect extraction and sentiment analysis as separate tasks, employing methods such as Long Short-Term Memory (LSTM) networks [Luo et al., 2019] and Bidirectional Encoder Representations from Transformers (BERT) [Yang et al., 2020]. Recent studies have shifted toward end-to-end solutions using multi-task learning frameworks [Luo et al., 2019; Lin and Yang, 2020; Chen et al., 2022b], highlighting the interdependence between aspects and sentiments. Techniques such as shared-private feature interaction [Lin and Yang, 2020] and task-regularization interaction [Chen et al., 2022b] have been introduced to improve performance by leveraging cross-task knowledge transfer. However, current approaches rely mainly on coarse-grained alignment, which lacks fine-grained control over aspect-sentiment relationships. This often leads to negative transfer, as coarse-grained approaches assume uniform sentiment polarity within related aspects, overlooking contextual cues that differentiate sentiments. More recently, large language models (LLMs) like GPT have demonstrated strong generalization capabilities, outperforming fine-tuned BERT in some low-resource ABSA scenarios [Wang et al., 2023; Zhou et al., 2024].
Nevertheless, they continue to underperform compared to state-of-the-art task-specific models, underscoring the need for further adaptation to
address the unique challenges of resource-constrained TSA tasks.

Contrastive Learning. Contrastive learning has recently achieved significant success in various domains [Jaiswal et al., 2020; Zhang et al., 2022; Luo et al., 2024; Zhong et al., 2024b; Zhong et al., 2024a; Zhong et al., 2025; Yuan et al., 2025a; Yuan et al., 2025b; Yuan et al., 2023]. In ABSA, several studies have explored its integration into model training to improve performance. For example, Liang [Liang et al., 2021] leveraged contrastive learning to distinguish between aspect-invariant and aspect-dependent features, leading to better sentiment classification. Similarly, Xiong [Xiong et al., 2022] proposed a triplet contrastive learning network that combines syntactic and semantic information via an aspect-oriented sub-tree and sentence-level contrastive learning. Chen [Chen et al., 2024a] further utilized contrastive learning to model adaptive task-relatedness between aspect and sentiment co-extraction. Different from these works, we leverage contrastive learning specifically for aspect extraction to enhance knowledge transfer in TSA.

Figure 2: A depiction of the proposed FCKT framework.

3 Methodology

In this section, we introduce the proposed FCKT framework in detail, as depicted in Figure 2. The comprehensive task consists of two base components: first, extracting opinion aspects (AE), and second, predicting the sentiment polarity (i.e., positive, neutral, or negative) of each aspect (SP).

3.1 Problem Description

Formally, given a sentence x = {x_0, x_1, ..., x_{n−1}}, where x_i represents the i-th token in x and n denotes the sentence length, the main objective of FCKT is to extract all possible aspects and predict their corresponding sentiments in x.

To enhance the representation of contextual semantics, we leverage the widely recognized BERT model [Devlin et al., 2019], an effective multi-layer bidirectional Transformer designed to construct rich contextual embeddings by simultaneously capturing both left and right word dependencies. BERT comprises a series of T Transformer layers and works as follows:

    H^0 = W_i E + P,    H^t = T(H^{t−1}),  t ∈ [1, T],        (1)

where W_i is the one-hot representation of the sub-words in the input sentence, E denotes the sub-word embedding matrix, P represents the positional embedding, T is the Transformer block, and H^t is the output of the t-th Transformer layer. For the given sentence x = {x_0, x_1, ..., x_{n−1}}, we assume the output of the final Transformer layer is H = H^T = {h_0, h_1, ..., h_{n−1}} ∈ R^{n×d}, where d is the embedding size.
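As a minimal sketch of this encoding step, assuming a bert-base-uncased checkpoint (the excerpt does not pin a specific one), the contextual embedding matrix H can be obtained with the Hugging Face transformers API:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

sentence = "The screen size is satisfactory."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    # [1, n, d]: the contextual token embeddings h_0, ..., h_{n-1}
    H = encoder(**inputs).last_hidden_state
```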
Remark. In our modeling process, sentences containing multiple aspect terms are split into separate sentences during training, each focusing on a single aspect, as illustrated in Figure 3.

Figure 3: An example of the sentence splitting process. (The review "The screen size is satisfactory but the phone battery capacity is limited", labeled (screen size, pos) and (phone battery capacity, neg), is split into two training samples, one per aspect.)

This preprocessing
step allows the model to effectively learn aspect-specific representations by isolating each aspect, avoiding interference from the others in the same sentence. It is important to note that this splitting strategy is applied only during training to facilitate the learning process; during testing, the original sentence structure is preserved, ensuring consistency in evaluation and practical application. Our approach, FCKT, leverages this strategy without altering the original optimization objectives or knowledge transfer mechanisms. Further details are provided in Appendix A, where we demonstrate that this strategy maintains optimization consistency while enabling fine-grained modeling of aspect-specific interactions.

3.2 Semantic Contrastive Learning for AE

In this paper, aspects are detected by predicting their start and end boundaries through a linear transformation [Lin and Yang, 2020]. Specifically, the start and end boundary distributions are predicted as follows:

    p̂_s = F_{ϕ1}(H),    p̂_e = F_{ϕ2}(H),        (2)

where F_{ϕ1} and F_{ϕ2} are MLP layers used for extracting aspects. The length of p̂_s matches that of the sentence, and each element of p̂_s represents the probability of a word being the start of an aspect. Note that this step only predicts the boundary distributions of aspects; to determine the final aspects, a heuristic extraction algorithm [Hu et al., 2019] is subsequently employed. During optimization, the learning objective for the boundary distributions is formulated as:

    L_ae = − Σ_{i=1}^{n} { p_{i,s}^T log(p̂_{i,s}) + p_{i,e}^T log(p̂_{i,e}) },        (3)

where p_{i,s} ∈ R^n and p_{i,e} ∈ R^n are the boundary ground truths (i.e., 0-1 vectors), p̂_{i,s} and p̂_{i,e} are the predicted boundary distributions, and n is the length of the sentence.

Compared to conventional coarse-grained transfer methods [Chen et al., 2024a; Sun et al., 2024], fine-grained knowledge transfer imposes stricter requirements on the precision of aspect extraction: any incorrectly identified or missing aspects are directly transferred to the sentiment classification phase, accumulating prediction inaccuracies. To mitigate this issue, we introduce a token-level semantic contrastive learning framework to improve aspect representation quality.

Intuitively, tokens within an aspect inherently exhibit strong semantic compatibility, as they frequently co-occur in similar contexts. For instance, as illustrated in Figure 2, the end boundary distribution suggests that the 2nd and 3rd words have comparable probabilities of being the end of an aspect; from a semantic perspective, however, the word "is" is unrelated to the preceding words and is therefore less likely to be an aspect boundary. Semantic information can thus serve as a complementary signal to the boundary distributions by verifying whether different tokens form a "semantically reasonable" aspect. To leverage this property, we treat the start and end tokens of the same aspect as positive pairs, encouraging their representations to be closely aligned. For negative pairs, we construct two types: (1) the start token of an aspect paired with end tokens from other, unrelated aspects, and (2) the end token of an aspect paired with start tokens from unrelated aspects. The InfoNCE loss [Chen et al., 2024b; Chen et al., 2024c] is then employed to optimize these token-level embeddings. Formally, this loss is defined as:

    L_cl = − Σ_{(s,e)∈E} log [ exp(s(h_s, h_e)/τ) / Σ ( exp(s(h_s, h_i)/τ) + exp(s(h_e, h_j)/τ) ) ],        (4)

where (h_s, h_e) are positive-pair embeddings, while (h_s, h_i) and (h_e, h_j) are negative pairs; τ is the temperature parameter, s(·,·) denotes the cosine similarity function, and E represents the set of all positive pairs in each sentence.
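A compact sketch of Eq. (4) is given below. It assumes that the negative pairs are drawn from the other aspects of the same sentence and that, as in standard InfoNCE, the positive term is included in the denominator; both are reasonable readings rather than details confirmed by this excerpt.

```python
import torch
import torch.nn.functional as F

def token_contrastive_loss(H, aspects, tau=0.1):
    # H: [n, d] token embeddings; aspects: list of (start, end) token indices.
    # (h_s, h_e) of the same aspect form a positive pair; boundary tokens of
    # the other aspects act as negatives, following Eq. (4).
    Z = F.normalize(H, dim=-1)  # cosine similarity becomes a dot product
    total = H.new_zeros(())
    for s, e in aspects:
        pos = torch.exp(Z[s] @ Z[e] / tau)
        denom = pos.clone()
        for s2, e2 in aspects:
            if (s2, e2) == (s, e):
                continue
            denom = denom + torch.exp(Z[s] @ Z[e2] / tau) \
                          + torch.exp(Z[e] @ Z[s2] / tau)
        total = total - torch.log(pos / denom)
    return total / max(len(aspects), 1)
```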
3.3 Fine-Grained Knowledge Transfer for SP

In the sentiment prediction task, the sentiment polarity of a given aspect is determined from the words within its boundaries. Let the start and end boundaries be denoted as s and e, respectively. The sentiment is then predicted as:

    ŷ(ψ) = C_θ(H_{s→e}),        (5)

where ŷ(ψ) represents the predicted class probability distribution, C_θ is a nonlinear classifier that maps input features to the range [0, 1], effectively modeling the likelihood of each sentiment class, and H_{s→e} denotes the aspect representation aggregated from the start boundary s to the end boundary e.

As mentioned in Section 1, these two sub-tasks exhibit a logical sequential dependence, implying that aspects can provide valuable signals for sentiment prediction. However, sentiment prediction in previous coarse-grained transfer methods relies on ground-truth boundaries, which essentially severs the sequential connection between the two sub-tasks. We therefore employ aspect features to supervise the sentiment prediction process: aspect extraction provides sequentially useful supervised signals for sentiment prediction, while sentiment information influences aspect detection through backpropagation. To achieve this, we refrain from using the ground-truth boundaries as input and instead derive an expectation of the word embeddings based on the boundary distributions. Consequently, Eq. (5) can be redefined as:

    ŷ(ℓ) = C_θ( E_{(i∼p̂_s, j∼p̂_e)} H_{i→j} ),        (6)

where E denotes the expectation operator over the indices i and j, sampled from the predicted start boundary distribution p̂_s and the end boundary distribution p̂_e, respectively. Furthermore, since p̂_s and p̂_e are both discrete distributions, we have the following derivation:

    ŷ(ℓ) = C_θ( E_{i∼p̂_s} [ E_{j∼p̂_e} [ Σ_{k=i}^{j} H_k ] ] )
         = C_θ( E_{i∼p̂_s} [ Σ_{j=1}^{n} p̂_{e,j} Σ_{k=i}^{j} H_k ] )
         = C_θ( Σ_{i=1}^{n} Σ_{j=1}^{n} p̂_{s,i} · p̂_{e,j} Σ_{k=i}^{j} H_k ),        (7)

where n is the sentence length. In this way, the two sub-tasks enable genuine end-to-end learning.

In practice, computing Eq. (7) requires traversing the whole sentence, with computational complexity O(n²), which is expensive. Fortunately, this complexity can be reduced based on two facts: (i) the end point is not smaller than the start point, and (ii) the length of an aspect term is usually small. We therefore revise Eq. (7) as follows:

    ŷ(ℓ) = C_θ( Σ_{i=1}^{n} Σ_{j=i}^{i+h} p̂_{s,i} · p̂_{e,j} Σ_{k=i}^{j} H_k ),        (8)

where h is the maximum length of an aspect. The computational complexity is thus reduced to O(nh) (with h ≪ n).
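Eq. (8) translates almost directly into code. The sketch below is a minimal transcription under assumed tensor shapes; it uses a prefix sum so that each span representation costs O(1), giving the stated O(nh) total.

```python
import torch

def expected_aspect_repr(H, p_s, p_e, h_max=3):
    # H: [n, d] token embeddings; p_s, p_e: [n] start/end boundary
    # distributions from Eq. (2). Computes Eq. (8):
    #   sum_i sum_{j=i..i+h} p_s[i] * p_e[j] * (H[i] + ... + H[j])
    n, d = H.shape
    prefix = torch.cat([H.new_zeros(1, d), H.cumsum(dim=0)], dim=0)  # [n+1, d]
    out = H.new_zeros(d)
    for i in range(n):
        for j in range(i, min(i + h_max + 1, n)):
            span = prefix[j + 1] - prefix[i]  # sum_{k=i}^{j} H[k] in O(1)
            out = out + p_s[i] * p_e[j] * span
    return out  # fed to the classifier C_theta in place of the ground-truth span
```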
In the testing phase, the sentiment is inferred from the predicted aspects, which aligns with our fine-grained transfer modeling. By inputting the distributional boundaries, sentiment information can be back-propagated to supervise the aspect detection process and enhance its accuracy. However, relying only on the distributional input may still leave the sentiment classifier with insufficient supervisory signals; in such a scenario, providing some ground-truth data shortens the error-propagation path and alleviates this problem. Specifically, during training, we use an alternating training strategy: a fixed ratio ξ of the samples is trained based on Eq. (5), while the remaining samples (1−ξ) are optimized using Eq. (8). The model parameters are optimized through the following cross-entropy loss [Chen et al., 2024a]:

    L_sp = − Σ_{i=1}^{N} Σ_{j=1}^{K} y_{i,j} log( ξ·ŷ(ψ)_{i,j} + (1−ξ)·ŷ(ℓ)_{i,j} ),        (9)

where N is the number of samples, K is the number of sentiment types, ŷ(ψ) and ŷ(ℓ) are the predicted sentiment distributions, and y is the ground truth. This strategy effectively combines real labels with transferred knowledge, leveraging their synergy to enhance FCKT.

Algorithm 1: Training Framework
    Input: training dataset D, learning rate α, sampling ratio ξ, model parameters Θ = {ϕ1, ϕ2, θ}
    while the convergence criterion is not met do
        for each batch B in Dataloader(D) do
            Compute p̂_s and p̂_e using Eq. (2)
            Evaluate the aspect extraction loss L_ae via Eq. (3)
            Evaluate the contrastive loss L_cl via Eq. (4)
            Sample a random value p ∼ Uniform(0, 1)
            if p > ξ then
                Construct the predicted E_{(i∼p̂_s, j∼p̂_e)} H_{i→j}
                Compute the predicted ŷ(ℓ) via Eq. (8)
            else
                Select the real aspect embedding H_{s→e}
                Compute the real ŷ(ψ) via Eq. (5)
            Evaluate the sentiment loss L_sp via Eq. (9)
            Aggregate the total loss L = L_ae + L_sp + λ·L_cl
            Update the parameters Θ = Θ − α∇_Θ L

3.4 Model Optimization

Overall Training. To enhance the performance of the FCKT framework, we jointly optimize the aspect extraction loss L_ae, the sentiment prediction loss L_sp, and the contrastive loss L_cl. The combined objective function is:

    L = L_ae + L_sp + λ·L_cl,        (10)

where λ is a trade-off parameter for the contrastive loss. The entire optimization process is illustrated in Algorithm 1, with all parameters trained end-to-end.

Complexity Analysis. We further analyze the time and space complexity of FCKT. The time complexity comprises three main components: (1) the aspect extraction module, with a complexity of O(n + n²) per training epoch; (2) the sentiment prediction module, with a complexity of O(h); and (3) the expectation input module, with a complexity of O(nh). Consequently, the overall time complexity of FCKT is O(n² + nh). In terms of space, the model's costs come primarily from the word embeddings and the contrastive learning process, giving a total space complexity of O(nd + n²), where d is the embedding size.

4 Experiments

In this section, we conduct experiments to answer the following five research questions. RQ1: How does the proposed FCKT perform overall compared to baseline approaches? RQ2: How effectively does FCKT perform across the two sub-tasks? RQ3: How do the various components of FCKT contribute to the final results? RQ4: How do different parameter configurations impact performance? RQ5: Does fine-grained knowledge transfer help address the aforementioned challenges effectively?

4.1 Experimental Setup

Datasets. To fairly assess FCKT, we conduct our experiments on three public datasets: Laptop, Restaurant, and Tweets. The data statistics are presented in Table 1; more dataset details are given in Appendix B.

Dataset    | #Sentences | #Aspects | #+   | #-   | #0
Laptop     | 1869       | 2936     | 1326 | 900  | 620
Restaurant | 3900       | 6603     | 4134 | 1538 | 931
Tweets     | 2350       | 3243     | 703  | 274  | 2266

Table 1: Statistics of the three datasets. "+/-/0" denote the positive, negative, and neutral sentiment polarities, respectively.

Baselines. We compare three categories of models: pipeline-based models, end-to-end models, and LLM-based models. More baseline details are listed in Appendix C.

Evaluation. We adopt three widely used metrics, precision, recall, and F1 score, to evaluate FCKT.
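Metric details are deferred to Appendix D. Under the usual exact-match convention for TSA (an assumption here, not a definition taken from the paper), a predicted pair counts as correct only when both the aspect span and the sentiment match the gold pair exactly:

```python
def pair_precision_recall_f1(pred_pairs, gold_pairs):
    # pred_pairs / gold_pairs: sets of ((start, end), sentiment) tuples.
    tp = len(pred_pairs & gold_pairs)
    p = tp / len(pred_pairs) if pred_pairs else 0.0
    r = tp / len(gold_pairs) if gold_pairs else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# Example: one correct pair out of two predictions and two gold pairs.
pred = {((1, 2), "pos"), ((7, 9), "pos")}
gold = {((1, 2), "pos"), ((7, 9), "neg")}
print(pair_precision_recall_f1(pred, gold))  # (0.5, 0.5, 0.5)
```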
Please refer to Appendix D for more metric details.

Implementation Details. All parameter details are comprehensively provided in Appendix E for further reference.

4.2 Main Experimental Results (RQ1)

We conduct experiments on the three public datasets; the comparison results between FCKT and the baselines are presented in Table 2. The table shows the following.

Among the baselines, our proposed FCKT consistently achieves the best F1 score in all cases. More precisely, compared with the state-of-the-art approach AIFI, FCKT improves the F1 score across the three datasets by approximately 1.38% on average. (Note that TSA remains a challenging task with minimal progress in recent research, making a 1.38% improvement significant.) These findings strongly suggest that the carefully crafted FCKT has the potential for superior performance, attributable to two primary factors: first, a fine-grained knowledge transfer framework that effectively captures mutual information between the sub-tasks; and second, a token-level contrastive learning mechanism that facilitates the extraction of more fitting aspects. However, the recall of FCKT is lower than that of several baseline models. One possible reason is that our heuristic extraction method filters out semantically irrational aspects based on cumulative start/end scores, which may inadvertently exclude some false negatives, leading to a lower recall score.

FCKT also demonstrates notable advantages in both performance and computational efficiency compared with advanced LLMs. As Table 2 illustrates, even the state-of-the-art LLM GPT-4, with its vast number of parameters, fails to deliver satisfactory results for TSA, despite leveraging few-shot learning and chain-of-thought (CoT) [Wei et al., 2022] enhancement. Moreover, using LLMs incurs substantial computational overhead, further underscoring the efficiency of our approach. This suggests that improving performance in these low-resource domains with LLMs remains challenging; despite advances in LLMs, traditional methods for TSA are still valuable, offering insights that complement the limitations of current techniques.

Method | Laptop (Prec./Rec./F1) | Restaurant (Prec./Rec./F1) | Tweets (Prec./Rec./F1)
Pipeline:
CRF-Pipeline† [Mitchell et al., 2013] | 0.5969/0.4754/0.5293 | 0.5228/0.5101/0.5164 | 0.4297/0.2521/0.3173
NN-CRF-Pipeline† [Zhang et al., 2015] | 0.5772/0.4932/0.5319 | 0.6009/0.6193/0.6100 | 0.4371/0.3712/0.4006
TAG-Pipeline† [Hu et al., 2019] | 0.6584/0.6719/0.6651 | 0.7166/0.7645/0.7398 | 0.5424/0.5437/0.5426
SPAN-Pipeline† [Hu et al., 2019] | 0.6946/0.6672/0.6806 | 0.7614/0.7334/0.7492 | 0.6072/0.5502/0.5769
End-to-End:
SPJM† [Zhou et al., 2019] | 0.6140/0.5820/0.5976 | 0.7620/0.6820/0.7198 | 0.5484/0.4844/0.5144
SPAN-Joint† [Hu et al., 2019] | 0.6741/0.6199/0.6459 | 0.7232/0.7261/0.7247 | 0.5703/0.5269/0.5455
S-AESC† [Lv et al., 2021] | 0.6687/0.6492/0.6588 | 0.7826/0.7050/0.7418 | 0.5586/0.5374/0.5473
HI-ASA‡ [Chen et al., 2022b] | 0.6796/0.6625/0.6709 | 0.7915/0.7621/0.7765 | 0.5732/0.5622/0.5676
DCS‡ [Li et al., 2023] | 0.6812/0.6640/0.6725 | 0.7835/0.7751/0.7793 | 0.5862/0.5835/0.5848
MiniConGTS‡ [Sun et al., 2024] | 0.7206/0.6725/0.6957 | 0.7926/0.7952/0.7939 | 0.6164/0.5728/0.5938
PDGN‡ [Zhu et al., 2024] | 0.7025/0.6812/0.6921 | 0.8036/0.7985/0.8010 | 0.6235/0.5924/0.6076
AIFI‡ [Chen et al., 2024a] | 0.7105/0.6915/0.7009 | 0.7925/0.8034/0.7979 | 0.6342/0.5911/0.6119
LLM-based:
GPT-3.5-turbo Zero-Shot‡ | 0.3462/0.4065/0.3739 | 0.6221/0.6605/0.6407 | 0.3750/0.2868/0.3250
GPT-3.5-turbo Few-Shot‡ | 0.3389/0.4452/0.3855 | 0.5847/0.6605/0.6203 | 0.3812/0.2841/0.3256
GPT-3.5-turbo CoT‡ | 0.3430/0.4581/0.3923 | 0.6624/0.6420/0.6520 | 0.4233/0.2752/0.3335
GPT-3.5-turbo CoT + Few-Shot‡ | 0.3532/0.4581/0.3989 | 0.6215/0.6790/0.6490 | 0.3752/0.3012/0.3342
GPT-4o Zero-Shot‡ | 0.3214/0.4065/0.3590 | 0.6242/0.5741/0.5981 | 0.2326/0.3704/0.2857
GPT-4o Few-Shot‡ | 0.3299/0.4194/0.3693 | 0.6571/0.5679/0.6093 | 0.2532/0.3631/0.2984
GPT-4o CoT‡ | 0.3371/0.3806/0.3576 | 0.6643/0.5741/0.6159 | 0.2823/0.3891/0.3272
GPT-4o CoT + Few-Shot‡ | 0.3622/0.4323/0.3941 | 0.6842/0.6420/0.6624 | 0.4022/0.3842/0.3930
Ours:
FCKT | 0.7599/0.6740/0.7144* | 0.8449/0.7877/0.8153* | 0.6512/0.5962/0.6225*

Table 2: Overall performance comparison on three real-world datasets. "†" denotes results taken directly from the original papers, while "‡" indicates results reproduced following the methods described in the original papers. The best F1 per dataset is achieved by FCKT, and * indicates statistical significance with a p-value ≤ 0.05 compared to the best-performing baseline.

4.3 Analysis on Both Sub-Tasks (RQ2)

To verify the model's performance on the individual tasks, we conduct a comparative analysis tailored to both sub-tasks in Table 3.
Task | Method | Laptop | Restaurant | Tweets
AE | HI-ASA (COLING22) | 0.8424 | 0.8511 | 0.7546
AE | DCS (EMNLP23) | 0.8455 | 0.8462 | 0.7543
AE | MiniConGTS (EMNLP24) | 0.8374 | 0.8432 | 0.7514
AE | PDGN (ACL24) | 0.8485 | 0.8546 | 0.7593
AE | GPT-4o CoT + Few-Shot | 0.5120 | 0.6010 | 0.4724
AE | AIFI (AAAI24) | 0.8511 | 0.8631 | 0.7634
AE | FCKT | 0.8534 | 0.8685 | 0.7754
SP | HI-ASA (COLING22) | 0.8531 | 0.9257 | 0.8451
SP | DCS (EMNLP23) | 0.8467 | 0.9212 | 0.8429
SP | MiniConGTS (EMNLP24) | 0.8541 | 0.9194 | 0.8349
SP | PDGN (ACL24) | 0.8519 | 0.9224 | 0.8496
SP | GPT-4o CoT + Few-Shot | 0.7045 | 0.7538 | 0.6323
SP | AIFI (AAAI24) | 0.8594 | 0.9305 | 0.8496
SP | FCKT | 0.8612 | 0.9310 | 0.8524

Table 3: Performance comparison of different methods on aspect extraction (F1 score) and sentiment prediction (accuracy).

In AE, our proposed FCKT achieves the best overall F1 score. The improvement on Tweets is more pronounced (+2%) than on the other two datasets: since the context length of tweets is usually shorter, our fine-grained transfer between the two sub-tasks is more effective, leading to larger gains. For SP, FCKT consistently outperforms the other approaches across the three datasets, although its improvement on the Restaurant dataset is less pronounced. We hypothesize that this discrepancy stems from the relatively high sample diversity of the Restaurant dataset, where the limited interaction information might be insufficient to substantially enhance sentiment prediction.

Task | AKT | TCL | Laptop | Restaurant | Tweets | Avg. Drop
AE | ✓ | ✓ | 0.8534 | 0.8685 | 0.7754 | -
AE | ✓ | ✗ | 0.8420 | 0.8612 | 0.7713 | ▼0.91%
AE | ✗ | ✓ | 0.8451 | 0.8603 | 0.7714 | ▼0.82%
AE | ✗ | ✗ | 0.8412 | 0.8568 | 0.7630 | ▼1.45%
SP | ✓ | ✓ | 0.8612 | 0.9310 | 0.8524 | -
SP | ✓ | ✗ | 0.8502 | 0.9214 | 0.8413 | ▼1.20%
SP | ✗ | ✓ | 0.8492 | 0.9213 | 0.8381 | ▼1.36%
SP | ✗ | ✗ | 0.8334 | 0.9189 | 0.8293 | ▼2.38%
TSA | ✓ | ✓ | 0.7144 | 0.8153 | 0.6225 | -
TSA | ✓ | ✗ | 0.6942 | 0.8064 | 0.6153 | ▼1.67%
TSA | ✗ | ✓ | 0.6775 | 0.7842 | 0.6072 | ▼3.86%
TSA | ✗ | ✗ | 0.6724 | 0.7792 | 0.5942 | ▼4.93%

Table 4: Results of the different modules on each task. "✓/✗ AKT" indicates whether the aspect knowledge transfer strategy is used, while "✓/✗ TCL" denotes the inclusion or exclusion of the token-level contrastive learning mechanism.
4.4 Ablation Study (RQ3)
To examine the contributions of various components, we delve further into FCKT and carry out ablation studies. The results are shown in Table 4. It is evident that removing specific modules leads to a decrease in model performance, highlighting the indispensable nature of each module and the critical role that the components play in achieving optimal performance. Looking more closely at the comparisons on the TSA task, the "AKT" module holds greater significance, indicated by the lower F1 score when it is removed compared to removing the other module. This is not surprising, as cross-task knowledge transfer is the core focus of our work, emphasizing the importance of effectively leveraging mutual information between tasks to enhance performance in TSA.

4.5 Parameter Analysis (RQ4)
Effect of parameter ξ. In FCKT, a critical parameter is the ratio ξ, which controls the connection between different tasks: it determines how much of the predicted aspect knowledge is transferred to the second task. We tune ξ within the range of 0 to 1, and the results, based on the TSA task, are presented in Figure 4. Across these datasets, we observe that the F1 score initially increases rapidly, then stabilizes, and finally declines as ξ increases from 0.8 or 0.9 to 1. The reason is that when ξ is small, the model lacks sufficient supervision from the ground truth, leading to suboptimal performance; when ξ is very large, the model fails to leverage the valuable interactions between the two sub-tasks, also resulting in unsatisfactory performance. The best performances are usually achieved when ξ is moderate on both datasets, which implies that neither the ground-truth boundary nor the distributional boundary is dominantly superior, and a mixture of them can be more favorable.

Figure 4: The results of FCKT w.r.t. different parameter ξ on two datasets. "FCKT w/o RAI" refers to removing real aspect inputs.

Effect of parameter h. We investigate the influence of the parameter h, which regulates the length of the aspect in Eq. (8). Theoretically, when h takes a large value, the model incurs an enormous memory cost, making convergence challenging. Conversely, when h is excessively small, accurately extracting aspects becomes non-trivial. Considering that aspect-term lengths are typically short, we conduct experiments varying h within the range {1, 2, 3, 4}. The results, presented in Figure 5, show F1 scores for both the Restaurant and Laptop domains. As h increases, we observe a gradual rise in performance up to a peak, followed by a decline. This suggests that maintaining a moderate h value (such as 2 or 3) can not only reduce the parameter count but also ensure accurate aspect extraction.

Figure 5: The F1 score of FCKT w.r.t. varying parameters h and λ.
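The paper does not spell out the exact rule by which ξ gates between ground-truth and predicted aspect boundaries during training. A minimal sketch, under the assumption of stochastic gating (the function and record layout are invented for illustration):

```python
import random

def pick_aspect_boundaries(gold_spans, predicted_spans, xi):
    """Feed the sentiment sub-task either the model's predicted aspect
    boundaries (with probability xi) or the ground-truth ones.
    xi = 0 means pure ground-truth supervision; xi = 1 means the second
    task relies entirely on predicted aspect knowledge. The stochastic
    gating rule itself is an assumption of this sketch."""
    return predicted_spans if random.random() < xi else gold_spans
```

A moderate ξ would then expose the sentiment sub-task to both supervision sources, matching the observation above that a mixture of the two boundaries performs best.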
Effect of parameter λ. We also analyze the effect of the contrastive weight λ on the F1 score across two datasets, Restaurant and Laptop. The results indicate that the optimal performance is achieved when λ = 0.1. As λ increases beyond this point, the F1 score decreases steadily for both datasets, with a sharper decline in the Restaurant domain. This suggests that a smaller λ effectively balances the contrastive and task-specific objectives, while higher values overly emphasize the contrastive loss, negatively impacting overall performance.

4.6 Case Study and Error Analysis (RQ5)
Case Study. To offer a deeper understanding of FCKT's performance, we present several case studies qualitatively in Figure 6. In the 1st case, aspect detection and sentiment prediction in AIFI may be compromised by weak sequential associations: it successfully identifies the aspect "chefs" but falters in accurately predicting the sentiment associated with "chefs". Conversely, FCKT allows sentiment information to backpropagate into the boundary-detection process, thereby bolstering the correlations and leading to accurate predictions. By modeling semantic compatibility for aspect extraction via token-level contrastive learning, FCKT can extract more precise aspects. For example, in the 2nd case, the word "separate" should not be included in the aspect "ram memory", a distinction that FCKT successfully identifies.

Figure 6: Some examples of different models (positive aspects marked in green, negative aspects in orange; ✓ and ✗ denote correct and incorrect predictions). Inputs: (1) "All the money went into the interior decoration, none of it went to the chefs." (2) "You will obtain a gift if you buy the separate ram memory." (3) "I must say I am surprised by the bad reviews of the restaurant, though the menu's font is small."

Error Analysis. While the FCKT model has shown favorable performance, it still encounters challenges when handling complex sentences. For example, in the 3rd sentence, due to the ambiguously expressed sentiment associated with "restaurant", the model struggles to make precise predictions.

5 Conclusion
In this paper, we addressed the problem of targeted sentiment analysis (TSA) by proposing a fine-grained cross-task knowledge transfer (FCKT) framework. By explicitly integrating aspect-level information into sentiment prediction, FCKT enables fine-grained knowledge transfer, effectively mitigating negative transfer and improving task performance. Extensive experiments conducted on three real-world benchmark datasets, including comparisons with diverse baselines and state-of-the-art LLMs, validate the effectiveness of FCKT.

Acknowledgments
We sincerely thank all the anonymous reviewers for their valuable comments to improve this paper. This work was supported by the National Key Research and Development Program of China under Grant No. 2024YFF0729003, the National Natural Science Foundation of China under Grant Nos. 62176014, 62276015, and 62206266, and the Fundamental Research Funds for the Central Universities.
Appendix
A. Splitting Strategy
In our modeling process, sentences containing multiple aspect terms are split into separate sentences during training, each focusing on a single aspect, as illustrated in Figure 3.
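For illustration, a minimal sketch of this per-aspect splitting; the record layout and function name are assumptions, not code from the paper:

```python
def split_by_aspect(sentence, aspects):
    """Expand one multi-aspect sentence into single-aspect training
    instances, each keeping the full sentence as context while
    supervising only one (start, end, polarity) annotation."""
    return [
        {"text": sentence, "aspect_span": (start, end), "polarity": polarity}
        for (start, end, polarity) in aspects
    ]

# e.g. "The food was great but the service was slow." yields one
# instance for ("food", positive) and one for ("service", negative).
```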
Here, we demonstrate that this strategy maintains optimization consistency while enabling fine-grained modeling of aspect-specific interactions. First, when there are multiple aspects within a sentence, the original optimization objective $\mathcal{L}$ can be rewritten as:

$$
\begin{aligned}
\mathcal{L}_{mul} &= \mathcal{L}_{ae} + \mathcal{L}_{cl} + \mathcal{L}_{sp} \\
&= -\sum_{i=1}^{N}\sum_{j=1}^{m}\left( \mathbf{p}_{s,i,j}^{T}\log(\hat{\mathbf{p}}_{s,i,j}) + \mathbf{p}_{e,i,j}^{T}\log(\hat{\mathbf{p}}_{e,i,j}) \right) \\
&\quad - \sum_{i=1}^{N}\sum_{j=1}^{l}\Big\{ s(\mathbf{h}_s,\mathbf{h}_e) - \sum_{(x,y)\in O} s(\mathbf{h}_x,\mathbf{h}_y) \Big\} \\
&\quad - \sum_{i=1}^{N}\sum_{j=1}^{l}\sum_{k=1}^{t} y_{i,j,k}\log \hat{y}_{i,j,k} \\
&= -\sum_{i=1}^{N}\sum_{j=1}^{m}\Bigg( \mathbf{p}_{s,i,j}^{T}\log(\hat{\mathbf{p}}_{s,i,j}) + \mathbf{p}_{e,i,j}^{T}\log(\hat{\mathbf{p}}_{e,i,j}) + \Big\{ s(\mathbf{h}_s,\mathbf{h}_e) - \sum_{(x,y)\in O} s(\mathbf{h}_x,\mathbf{h}_y) \Big\} + \sum_{k=1}^{t} y_{i,j,k}\log \hat{y}_{i,j,k} \Bigg), \quad (11)
\end{aligned}
$$

where $N$ is the number of sentences, $m$ denotes the sentence length, and $l$ is the number of aspects in each sentence. $s(\cdot,\cdot)$ represents the cosine similarity function used in contrastive learning, defined as $s(\mathbf{h}_x,\mathbf{h}_y) = \frac{\mathbf{h}_x \cdot \mathbf{h}_y}{\|\mathbf{h}_x\|\,\|\mathbf{h}_y\|}$, where $\mathbf{h}_x$ and $\mathbf{h}_y$ are embeddings of the tokens. $O$ denotes the set of negative pairs for each aspect, containing pairs of tokens that are not semantically aligned. $t$ refers to the number of sentiment categories (e.g., positive, neutral, and negative). Finally, $y_{i,j,k}$ is the ground-truth label for the $k$-th sentiment category of the $j$-th aspect in the $i$-th sentence, and $\hat{y}_{i,j,k}$ is the corresponding predicted probability.

Building on the derivation above, it becomes evident that span-based targeted sentiment analysis can be interpreted as optimizing for a single aspect at a time. This formulation ensures that the optimization process remains unaffected by the number of aspects present in a sentence, allowing for consistent and efficient learning. Furthermore, under the assumption that each sentence contains only a single aspect, the optimization objective can be redefined as:

$$
\mathcal{L}_{sep} = \mathcal{L}_{ae} + \mathcal{L}_{cl} + \mathcal{L}_{sp} = -\sum_{i=1}^{M}\sum_{j=1}^{m}\Bigg( \mathbf{p}_{s,i,j}^{T}\log(\hat{\mathbf{p}}_{s,i,j}) + \mathbf{p}_{e,i,j}^{T}\log(\hat{\mathbf{p}}_{e,i,j}) + \Big\{ s(\mathbf{h}_s,\mathbf{h}_e) - \sum_{(x,y)\in O} s(\mathbf{h}_x,\mathbf{h}_y) \Big\} + \sum_{k=1}^{t} y_{i,j,k}\log \hat{y}_{i,j,k} \Bigg), \quad (12)
$$

where $M = \sum_{i=1}^{N} l_i$ is the total number of aspects in the original training dataset, with $l_i$ the number of aspects in the $i$-th sentence. The relationship between the two optimization objectives can then be expressed as:

$$
\mathcal{L}_{mul} \propto \mathcal{L}_{sep}. \quad (13)
$$

This indicates a direct proportionality between the two objectives, validating the effectiveness of our partitioning strategy. By leveraging this relationship, our method simplifies the training process while enhancing the model's capacity to manage aspect-specific interactions effectively, demonstrating its utility in fine-grained sentiment analysis.
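The cosine-similarity contrastive term shared by Eqs. (11) and (12) can be rendered directly in code. A minimal PyTorch sketch, with tensor shapes and the function signature assumed for illustration:

```python
import torch
import torch.nn.functional as F

def contrastive_term(h_s, h_e, negative_pairs):
    """Token-level contrastive score from Eq. (11): cosine similarity of
    the aspect's start/end token representations minus the summed
    similarity over the negative pairs in O. `h_s` and `h_e` are 1-D
    embeddings; `negative_pairs` is a list of (h_x, h_y) tensors. The
    overall loss negates this term, so minimizing it pulls the true
    boundary pair together and pushes non-aligned pairs apart."""
    pos = F.cosine_similarity(h_s, h_e, dim=0)
    neg = sum(F.cosine_similarity(h_x, h_y, dim=0)
              for h_x, h_y in negative_pairs)
    return pos - neg
```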
B. Datasets
To fairly assess our proposed FCKT, we conducted our experiments on three public datasets: Laptop, Restaurant, and Tweets. (1) Laptop consists of customer reviews in the electronic-product domain, collected from the SemEval Challenge 2014 [Pontiki et al., 2014]. (2) Restaurant comprises reviews from the restaurant domain, combining the review sets of SemEval 2014, 2015, and 2016 [Pontiki et al., 2014; Pontiki et al., 2015; Pontiki et al., 2016]. (3) Tweets, created by Mitchell et al. [Mitchell et al., 2013], comprises Twitter posts from various users.

C. Baselines
In this study, we compare three categories of models:
(1) Pipeline-based models, which process the tasks in sequential steps:
• CRF-Pipeline [Mitchell et al., 2013]: employs a CRF as an aspect sequence extractor, succeeded by a sentiment classifier.
• NN-CRF-Pipeline [Zhang et al., 2015]: unlike the aforementioned model, incorporates a shallow neural network preceding the CRF.
• TAG-Pipeline [Hu et al., 2019]: a sequence-tagging approach utilizing a BERT encoder.
• SPAN-Pipeline [Hu et al., 2019]: utilizes BERT as the shared encoder for the two tasks and subsequently builds its own model for each task.
(2) End-to-End models, which directly map inputs to outputs without intermediate steps. Representative models are:
• SPJM [Zhou et al., 2019]: a span-based method that directly and heuristically searches the boundaries of aspect terms and then classifies the extracted aspect boundaries.
• SPAN-Joint [Hu et al., 2019]: utilizes BERT as the shared encoder for the two tasks and subsequently builds its own model for each task.
• S-AESC [Lv et al., 2021]: aspects and sentiments are generated collaboratively using both dual gated recurrent units and an interaction layer.
• HI-ASA [Chen et al., 2022b]: a hierarchical interactive network devised to enhance the mutual interactions between aspect and sentiment, incorporating input-side as well as output-side interactions in a two-way communication framework.
• DCS [Li et al., 2023]: proposes a dual-channel span-generation method to effectively constrain the search space of span candidates for aspect-sentiment triplet extraction.
• MiniConGTS [Sun et al., 2024]: proposes a minimalist tagging scheme and a token-level contrastive learning strategy to improve pretrained representations for aspect sentiment triplet extraction, achieving promising performance with reduced computational overhead.
• PDGN [Zhu et al., 2024]: introduces the grid noise diffusion pinpoint network, a T5-based generative model with three novel modules that address generation instability and improve robustness and effectiveness in aspect sentiment quad prediction tasks.
• AIFI [Chen et al., 2024a]: the current state-of-the-art model for targeted sentiment analysis; a variant of HI-ASA that introduces an adaptive feature-interaction framework leveraging contrastive learning.
(3) LLM-based models, which leverage large language models with advanced reasoning capabilities. We use two backbones, GPT-3.5-turbo and GPT-4o, employing four distinct techniques:
• Zero-Shot: the model predicts without any prior task-specific examples, relying solely on its pre-trained knowledge to understand and perform the task.
• Few-Shot: the model is provided with a small number of task-specific examples to guide its predictions, improving accuracy over zero-shot scenarios. We utilized 5-shot, 10-shot, and 20-shot settings, all randomly sampled from the training set; the 5-shot method performed best, while the 10-shot and 20-shot methods declined, so the tables report the 5-shot results.
• CoT: the model generates intermediate reasoning steps, enabling it to perform complex tasks by breaking them into smaller, logical steps.
• CoT + Few-Shot: combines the strengths of chain-of-thought reasoning and few-shot learning by providing task-specific examples along with explicit reasoning steps to enhance performance further.
D. Evaluation
We employ three widely used metrics (precision, recall, and F1 score) to evaluate the effectiveness of our proposed FCKT. For aspect extraction, we use the F1 score as the primary evaluation metric, while accuracy is adopted for sentiment prediction. Notably, a predicted target is considered correct only if it exactly matches both the gold-standard aspect and its corresponding polarity. To ensure robust and consistent evaluation while minimizing the impact of randomness, we conducted 10 independent experiments and report the average performance across these runs.

E. Implementation Details
Following previous works [Lv et al., 2023; Chen et al., 2024a], we split the training and test sets for each dataset. For the Tweets dataset, which lacks a predefined train-test split, we perform ten-fold cross-validation to ensure robust evaluation. We utilize BERT-Large as the backbone network. The maximum aspect length h is selected from {1, 2, 3, 4}, and the ratio ξ of distributional input samples is varied between 0 and 1. Model optimization uses the Adam optimizer [Kingma and Ba, 2014], with the learning rate searched from {2e-5, 2e-3, 2e-1}. The batch size is set to 16, and a dropout probability of 0.1 is applied. The weight λ is tuned over {0.1, 0.3, 0.5, 0.7}. All experiments are implemented in PyTorch and conducted on Nvidia GeForce RTX 3090 GPUs.
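As a concrete illustration of the exact-match rule above, a minimal scoring sketch; the set-of-pairs input format is an assumption:

```python
def exact_match_f1(predicted, gold):
    """F1 under the exact-match rule: a prediction counts only if both
    the aspect and its polarity match a gold target. `predicted` and
    `gold` are assumed to be sets of (aspect, polarity) pairs."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. gold = {("battery life", "positive")}; predicting
# ("battery", "positive") scores 0 because the span is not exact.
```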
References
[Cai et al., 2021] Hongjie Cai, Rui Xia, and Jianfei Yu. Aspect-category-opinion-sentiment quadruple extraction with implicit aspects and opinions. In ACL-IJCNLP, 2021.
[Chen et al., 2022a] Hao Chen, Zepeng Zhai, Fangxiang Feng, Ruifan Li, and Xiaojie Wang. Enhanced multi-channel graph convolutional network for aspect sentiment triplet extraction. In ACL, 2022.
[Chen et al., 2022b] Wei Chen, Jinglong Du, Zhao Zhang, Fuzhen Zhuang, and Zhongshi He. A hierarchical interactive network for joint span-based aspect-sentiment analysis. In COLING, 2022.
[Chen et al., 2024a] Wei Chen, Yuxuan Liu, Zhao Zhang, Fuzhen Zhuang, and Jiang Zhong. Modeling adaptive inter-task feature interactions via sentiment-aware contrastive learning for joint aspect-sentiment prediction. In AAAI, 2024.
[Chen et al., 2024b] Wei Chen, Yiqing Wu, Zhao Zhang, Fuzhen Zhuang, Zhongshi He, Ruobing Xie, and Feng Xia. FairGap: Fairness-aware recommendation via generating counterfactual graph. TOIS, 2024.
[Chen et al., 2024c] Wei Chen, Meng Yuan, Zhao Zhang, Ruobing Xie, Fuzhen Zhuang, Deqing Wang, and Rui Liu. FairDgcl: Fairness-aware recommendation with dynamic graph contrastive learning. arXiv preprint arXiv:2410.17555, 2024.
[Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
[Hu et al., 2019] Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. Open-domain targeted sentiment analysis via span-based extraction and classification. In ACL, 2019.
[Jaiswal et al., 2020] Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, and Fillia Makedon. A survey on contrastive self-supervised learning. Technologies, 2020.
[Kalbhor and Goyal, 2023] Shraddha Kalbhor and Dinesh Goyal. Survey on ABSA based on machine learning, deep learning and transfer learning approach. In AIP Conference Proceedings, 2023.
[Kingma and Ba, 2014] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[Li et al., 2023] Pan Li, Ping Li, and Kai Zhang. Dual-channel span for aspect sentiment triplet extraction. In EMNLP, 2023.
[Liang et al., 2021] Bin Liang, Wangda Luo, Xiang Li, Lin Gui, Min Yang, Xiaoqi Yu, and Ruifeng Xu. Enhancing aspect-based sentiment analysis with supervised contrastive learning. In CIKM, 2021.
[Lin and Yang, 2020] Peiqin Lin and Meng Yang. A shared-private representation model with coarse-to-fine extraction for target sentiment analysis. In EMNLP: Findings, 2020.
[Liu, 2012] Bing Liu. Sentiment analysis and opinion mining. In Synthesis Lectures on Human Language Technologies, 2012.
[Luo et al., 2019] Huaishao Luo, Tianrui Li, Bing Liu, and Junbo Zhang. DOER: Dual cross-shared RNN for aspect term-polarity co-extraction. In ACL, 2019.
[Luo et al., 2024] Haoran Luo, Haihong E, Yuhao Yang, Tianyu Yao, Yikai Guo, Zichen Tang, Wentai Zhang, Shiyao Peng, Kaiyang Wan, Meina Song, Wei Lin, Yifan Zhu, and Anh Tuan Luu. Text2NKG: Fine-grained n-ary relation extraction for n-ary relational knowledge graph construction. In NeurIPS, 2024.
[Lv et al., 2021] Yanxia Lv, Fangna Wei, Ying Zheng, Cong Wang, Cong Wan, and Cuirong Wang. A span-based model for aspect terms extraction and aspect sentiment classification. Neural Computing and Applications, 2021.
[Lv et al., 2023] Haoran Lv, Junyi Liu, Henan Wang, Yaoming Wang, Jixiang Luo, and Yaxiao Liu. Efficient hybrid generation framework for aspect-based sentiment analysis. In EACL, 2023.
[Mitchell et al., 2013] Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. Open domain targeted sentiment. In EMNLP, 2013.
[Nath and Dwivedi, 2024] Deena Nath and Sanjay K. Dwivedi. Aspect-based sentiment analysis: approaches, applications, challenges and trends. KIS, 2024.
[Pontiki et al., 2014] Maria Pontiki, Dimitrios Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. SemEval-2014 task 4: Aspect based sentiment analysis. In SemEval, 2014.
[Pontiki et al., 2015] Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. SemEval-2015 task 12: Aspect based sentiment analysis. In SemEval, 2015.
[Pontiki et al., 2016] Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, AL-Smadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. SemEval-2016 task 5: Aspect based sentiment analysis. In SemEval, 2016.
[Sun et al., 2024] Qiao Sun, Liujia Yang, Minghao Ma, Nanyang Ye, and Qinying Gu. MiniConGTS: A near ultimate minimalist contrastive grid tagging scheme for aspect sentiment triplet extraction. In EMNLP, 2024.
[Tang et al., 2020] Hao Tang, Donghong Ji, Chenliang Li, and Qiji Zhou. Dependency graph enhanced dual-transformer structure for aspect-based sentiment classification. In ACL, 2020.
[Wang et al., 2016] Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. Recursive neural conditional random fields for aspect-based sentiment analysis. In EMNLP, 2016.
[Wang et al., 2023] Zengzhi Wang, Qiming Xie, Zixiang Ding, Yi Feng, and Rui Xia. Is ChatGPT a good sentiment analyzer? A preliminary study. arXiv preprint arXiv:2304.04339, 2023.
[Wei et al., 2022] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022.
[Xiong et al., 2022] Haoliang Xiong, Zehao Yan, Hongya Zhao, Zhenhua Huang, and Yun Xue. Triplet contrastive learning for aspect level sentiment classification. Mathematics, 2022.
[Xu et al., 2021] Lu Xu, Yew Ken Chia, and Lidong Bing. Learning span-level interactions for aspect sentiment triplet extraction. In ACL, 2021.
[Yang et al., 2020] Yunyi Yang, Kun Li, Xiaojun Quan, Weizhou Shen, and Qinliang Su. Constituency lattice encoding for aspect term extraction. In COLING, 2020.
[Yuan et al., 2023] Meng Yuan, Fuzhen Zhuang, Zhao Zhang, Deqing Wang, and Jin Dong. Knowledge-based multiple adaptive spaces fusion for recommendation. In RecSys, 2023.
[Yuan et al., 2025a] Meng Yuan, Yutian Xiao, Wei Chen, Chu Zhao, Deqing Wang, and Fuzhen Zhuang. Hyperbolic diffusion recommender model. WWW, 2025.
[Yuan et al., 2025b] Meng Yuan, Zhao Zhang, Wei Chen, Chu Zhao, Tong Cai, Deqing Wang, Rui Liu, and Fuzhen Zhuang. HEK-CL: Hierarchical enhanced knowledge-aware contrastive learning for recommendation. TOIS, 2025.
[Zhang et al., 2015] Meishan Zhang, Yue Zhang, and Duy Tin Vo. Neural networks for open domain targeted sentiment. In EMNLP, 2015.
[Zhang et al., 2022] Rui Zhang, Yangfeng Ji, Yue Zhang, and Rebecca J. Passonneau. Contrastive data and learning for natural language processing. In NAACL: Human Language Technologies: Tutorial Abstracts, 2022.
[Zhong et al., 2024a] Yan Zhong, Xingyu Wu, Li Zhang, Chenxi Yang, and Tingting Jiang. Causal-IQA: Towards the generalization of image quality assessment based on causal inference. In ICML, 2024.
[Zhong et al., 2024b] Yan Zhong, Ruoyu Zhao, Chao Wang, Qinghai Guo, Jianguo Zhang, Zhichao Lu, and Luziwei Leng. Spike-SSM: A sparse, precise, and efficient spiking state space model for long sequences learning. arXiv preprint arXiv:2410.17268, 2024.
[Zhong et al., 2025] Yan Zhong, Xinping Zhao, Guangzhi Zhao, Bohua Chen, Fei Hao, Ruoyu Zhao, Jiaqi He, Lei Shi, and Li Zhang. CTD-Inpainting: Towards the coherence of text-driven inpainting with blended diffusion. Information Fusion, 2025.
[Zhou et al., 2019] Yan Zhou, Longtao Huang, Tao Guo, Jizhong Han, and Songlin Hu. A span-based joint model for opinion target extraction and target sentiment classification. In IJCAI, 2019.
[Zhou et al., 2024] Changzhi Zhou, Dandan Song, Yuhang Tian, Zhijing Wu, Hao Wang, Xinyu Zhang, Jun Yang, Ziyi Yang, and Shuhao Zhang. A comprehensive evaluation of large language models on aspect-based sentiment analysis. arXiv preprint arXiv:2412.02279, 2024.
[Zhu et al., 2024] Linan Zhu, Xiangfan Chen, Xiaolei Guo, Chenwei Zhang, Zhechao Zhu, Zehai Zhou, and Xiangjie Kong. Pinpointing diffusion grid noise to enhance aspect sentiment quad prediction. In ACL, 2024.
arXiv:2505.21045v1 [cs.AI] 27 May 2025

Large Language Model-enhanced Reinforcement Learning for Low-Altitude Economy Networking
Lingyi Cai, Ruichen Zhang, Changyuan Zhao, Yu Zhang, Jiawen Kang, Senior Member, IEEE, Dusit Niyato, Fellow, IEEE, Tao Jiang, Fellow, IEEE, and Xuemin Shen, Fellow, IEEE

Lingyi Cai is with the Research Center of 6G Mobile Communications, School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China, and also with the College of Computing and Data Science, Nanyang Technological University, Singapore. Ruichen Zhang, Changyuan Zhao, and Dusit Niyato are with the College of Computing and Data Science, Nanyang Technological University, Singapore. Yu Zhang and Tao Jiang are with the Research Center of 6G Mobile Communications, School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China. Jiawen Kang is with the School of Automation, Guangdong University of Technology, Guangzhou 510006, China. Xuemin Shen is with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada.

Abstract—Low-Altitude Economic Networking (LAENet) aims to support diverse flying applications below 1,000 meters by deploying various aerial vehicles for flexible and cost-effective aerial networking. However, complex decision-making, resource constraints, and environmental uncertainty pose significant challenges to the development of the LAENet. Reinforcement learning (RL) offers a potential solution to these challenges but has limitations in generalization, reward design, and model stability. The emergence of large language models (LLMs) offers new opportunities for RL to mitigate these limitations. In this paper, we first present a tutorial on integrating LLMs into RL using the generation, contextual understanding, and structured reasoning capacities of LLMs. We then propose an LLM-enhanced RL framework for the LAENet in which the LLM serves as information processor, reward designer, decision-maker, and generator. Moreover, we conduct a case study using LLMs to design a reward function that improves the learning performance of RL in the LAENet. Finally, we provide a conclusion and discuss future work.

Index Terms—Low-altitude economy networking, reinforcement learning, large language model, decision-making, reward design.

I. INTRODUCTION
Low-altitude economic networking (LAENet) refers to the integration of communication and network infrastructure designed to support the deployment of manned and unmanned aerial vehicles (UAVs) in the airspace below 1,000 meters [1]. The primary goal is to generate commercial and societal value through diverse aerial operations. Specifically, the LAENet is distinguished by its high mobility, adaptive deployment, and cost-effectiveness [2]. For example, these aerial platforms can serve as mobile and cost-effective communication nodes capable of acting as aerial base stations, communication relays, and edge computing devices [2]. Thus, the LAENet can support diverse applications such as intelligent transportation, disaster response, and ubiquitous telecommunications.
However, these advantages are accompanied by several significant challenges: real-time decision-making for UAV operation and network coordination; environmental uncertainties with unpredictable channel conditions and user mobility; and resource-constrained heterogeneity such as limited energy and communication capacity [1]. To address
these challenges, reinforcement learning (RL) emerges as a promising solution for the LAENet [3]. Specifically, RL enables autonomous and adaptive control, allowing aerial vehicles to make time-sensitive decisions without reliance on predefined models. By continuously observing and interacting with dynamic environments, RL can facilitate robust decision-making under uncertainty. Additionally, RL can learn optimized policies to manage resources under constraints of energy, bandwidth, and computational capacity.

However, classical RL still has limitations in addressing the challenges faced by the LAENet [4]. Classical RL often struggles with generalization, as models trained for specific tasks lack the capacity to adapt to new and dynamic scenarios. Importantly, the manual design of reward functions in classical RL can lead to suboptimal policies or unintended behaviors if not properly formulated. Furthermore, RL methods are prone to performance degradation due to error accumulation in the decision-making process.

Recently, with the rise of large language model (LLM) technology, enhancing classical RL with the strengths of LLMs has emerged as a promising research direction for addressing key challenges in the LAENet. LLMs trained on massive and diverse datasets exhibit a strong capacity for generation, contextual understanding, and structured reasoning [4], [5]. For example, LLMs can perform cross-domain knowledge transfer to provide relatively accurate responses when facing varying inputs and cross-scenario tasks. Through their chain-of-thought reasoning and understanding, LLMs can construct effective signals for decision-making by capturing nuanced trade-offs and important objectives in the context of tasks [6]. These properties make LLMs suitable for integration into RL tasks that require flexibility, adaptability, and policy learning under limited prior knowledge and uncertain environments.

In this paper, we first provide a tutorial on integrating LLMs into RL by reviewing existing related work, which shows the benefits of LLMs in improving the performance of classical RL. Then, we propose an LLM-enhanced RL framework in the context of the LAENet, where the LLM is used to interpret complex multimodal inputs, shape adaptive reward functions, guide or generate action sequences, and simulate future state transitions in the learning process. In addition, we implement a case study on utilizing the LLM to shape the reward function of RL to optimize energy consumption in the LAENet. Finally, we conclude this paper and explore future research directions on LLM-enhanced RL for the LAENet. The main contributions of this paper are summarized as follows:
1) We comprehensively analyze how LLMs can address the limitations of classical RL and synthesize existing research to guide future integration efforts, which provides a useful foundation for leveraging LLMs to enhance the capabilities of RL.
2) We propose a novel LLM-enhanced RL framework for the LAENet, where the LLM functions as an information processor, reward designer, decision-maker, and simulator, effectively unifying the complementary strengths of LLMs and RL.
3) We provide a case study to demonstrate the benefit of using LLMs to design reward functions for RL agents in the LAENet. Compared to manually crafted rewards, the LLM-designed reward leads to more efficient learning and improved performance.
II. OVERVIEW OF ENHANCING RL WITH LLM
In this section, we review the background of RL and LLMs, and then highlight the potential of LLMs for enhancing RL.

A. Background of RL
RL is a foundational ML paradigm driven by a trial-and-error mechanism: the agent observes the current state of the environment, selects an action, and receives a reward signal that reflects the quality of that action [4]. The agent's objective is to maximize the cumulative long-term reward, often referred to as the return. The expected return is improved by iteratively updating the agent's policy until it converges to an optimal or near-optimal solution [7]. In this way, RL agents learn optimal strategies through interaction with dynamic environments, enabling real-time decision-making and autonomous coordination.

RL has evolved from early model-free methods (such as Q-learning) to advanced DRL techniques capable of handling high-dimensional inputs. Key milestones include the introduction of Deep Q-Networks (DQN) and subsequent algorithms such as PPO and SAC for applying RL in complex environments, as shown in Fig. 1. However, several challenges still limit the performance of RL:
1) Generalization and Multimodal Understanding: In dynamic and uncertain environments, classical RL trained in specific environments may be difficult to generalize to new and complex scenarios, as it lacks the ability to process multimodal data (e.g., visual and language data for RL agents in robotics applications).
2) Reward Function Design and Feedback: It is challenging to define reward functions that trade off among multiple objectives. Inappropriately designed rewards can lead to suboptimal policy learning and unintended behaviors.
3) Model Instability and Lack of Interpretability: Model-based RL suffers from error accumulation in dynamic environments. Moreover, the decision-making process of classical RL lacks interpretability and may be unsuitable for safety-critical scenarios.
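For concreteness, a minimal sketch of the trial-and-error loop using tabular Q-learning, the early model-free method named above; states, actions, and hyperparameters are illustrative:

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # Q-values keyed by (state, action), zero-initialised

def q_learning_step(state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.99):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def epsilon_greedy(state, actions, epsilon=0.1):
    """Trial-and-error exploration: act randomly with probability epsilon,
    otherwise exploit the current value estimates."""
    if random.random() < epsilon:
        return random.choice(list(actions))
    return max(actions, key=lambda a: Q[(state, a)])
```

DQN, PPO, and SAC replace the table with neural networks and more stable update rules, but the observe-act-reward-update loop is the same.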
B. Background of LLM and Its Potential to Enhance RL
LLMs are advanced deep learning models trained on extensive, terabyte-level datasets and typically characterized by billions of parameters [4], [5]. As shown in Fig. 1, the development of LLMs has been driven by the Transformer architecture, which laid the foundation for both bidirectional masked language modeling (e.g., BERT) and autoregressive generative pretraining (e.g., the GPT series). With continued scaling of model size and data, the release of open-source foundation models such as LLaMA aims to reduce the number of parameters while maintaining model performance. This massive scale of data and model complexity enables LLMs to achieve remarkable capabilities in language generation, knowledge representation, and logical reasoning, and several favorable properties of LLMs have the potential to help classical RL overcome the above limitations, as shown in Fig. 1.

1) Generalization and Multimodal Comprehension: LLMs are pretrained on broad datasets and can process diverse data modalities, such as textual commands, spatial layouts, and raw visual inputs. These complex inputs may be difficult for classical RL agents to interpret and generalize due to their reliance on domain-specific encoders. To overcome these limitations, the authors in [8] leveraged frozen LLMs as high-level planners to bridge abstract natural language instructions with motion control in robotic tasks, thereby enabling generalization of the RL agent using only raw visual observations without task-specific retraining. The sample efficiency of the proposed scheme outperformed Long Short-Term Memory (LSTM)-based baselines by 18.5%. Similarly, the study in [9] utilized a frozen pre-trained language model to compress past observations into semantic representations, enabling agents to learn from historical context for generalization without retraining and to outperform hierarchical RL baselines by 85%.

2) Context-Aware Reward Shaping: LLMs can shape reward functions that balance multiple objectives or constraints for RL agents by using domain knowledge and context comprehension, addressing the limitation of static or unsuitable reward functions in RL that otherwise require extensive domain expertise. For example, the work in [7] shaped an RL agent's reward model by prompting LLMs with examples and preference descriptions to guide agent behavior. The LLM-based rewards achieved 91% accuracy in the Ultimatum Game versus 67% for conventional reward engineering approaches. In the more complex application of active distribution networks, the study in [12] utilized LLMs to enable context-aware reward shaping for RL agents by leveraging domain knowledge and iterative refinement through multi-round dialogues. As a result, the LLM-based reward-shaping approach reduced performance variance by 89.4% over conventional fixed-reward methods.
3) Structured Reasoning and Stable Decisions: LLMs can bring more transparency to decision-making for RL agents by supporting step-by-step, structured reasoning through prompting strategies such as chain of thought. Meanwhile, decision-making in RL can become more stable thanks to the capacity of LLMs to transfer knowledge across domains. For instance, the authors in [10] enhanced decision stability and structured reasoning in offline RL by leveraging pre-trained language models' sequential knowledge and linguistic representations. Compared to a value-based offline RL algorithm, this scheme reduces performance variance by 40% even with only 1% of the whole dataset. The work in [13] utilized LLMs to generate structured, rule-based controllers through step-by-step prompting for robotic manipulation, where RL benefits from integration with these controllers to stabilize and refine policy learning. The proposed scheme maintains error rates below 0.12% in dynamic manipulation tasks, compared to 4.7% for the TD3 algorithm baseline.

Fig. 1. An illustration of RL, LLM, and applications of LLM-enhanced RL. The number of peer-reviewed publications regarding RL, LLM, and LLM-enhanced RL per year is shown on the left-hand side (the publication data was collected from IEEE Xplore in April 2025). The timeline spans Q-learning (1992) through DRL, AlphaGo, PPO, and SAC on the RL side, and Transformer, BERT, GPT-2/3/4, and LLaMA on the LLM side, and annotates existing research on LLM-enhanced RL (LLMs guide RL via language-to-vision generalization; shape RL rewards via prompts and preferences; compress history for RL generalization without retraining; enable contextual reward shaping via dialogue; enhance offline RL with reasoning and stability; and generate rule-based controllers to refine RL). The figure also summarizes the three aspects in which LLMs can enhance RL: generalization and multimodal comprehension, context-aware reward shaping, and structured reasoning and stable decisions.

TABLE I: Summary of recent projects on LLM-enhanced RL
| Ref. | GitHub Link | Tasks Supported | LLM Type | Training Data | Performance Metrics | Last Update |
|---|---|---|---|---|---|---|
| [8] | https://github.com/ml-jku/helm | Partially observable RL | Transformer-XL | RandomMaze, Minigrid, Procgen | Sample efficiency, IQM | Dec. 2024 |
| [9] | https://github.com/mihdalal/planseqlearn | Long-horizon robot control | GPT-4 | Meta-World, ObstructedSuite, Kitchen, Robosuite | Sample efficiency, task success rate | Aug. 2024 |
| [7] | https://github.com/minaek/reward_design_with_llms | Reward design in RL | GPT-3 | Ultimatum Game, Matrix Games, DEALORNODEAL | Labeling accuracy, user alignment score | May 2023 |
| [10] | https://github.com/srzer/LaMo-2023 | Offline RL | GPT-2 | D4RL, d4rl-atari | Sample efficiency, sparse reward performance | Jun. 2024 |
| [11] | https://github.com/noahshinn/reflexion | Reasoning and programming | GPT, starchat-beta | ALFWorld, HotPotQA, HumanEval, MBPP | Pass@1 accuracy, exact match, hallucination rate | Jan. 2025 |

III. LLM-ENHANCED RL FOR LAENET
Based on the above analysis, LLM-enhanced RL can be defined as the set of methods that integrate the high-level cognitive capabilities of LLMs, such as multimodal information processing, understanding, reasoning, planning, and generating, into the RL paradigm. We therefore propose an LLM-enhanced RL framework for the LAENet, as shown in Fig. 2, which leverages the complementary strengths of LLMs and RL to address the limitations of classical RL.

A. Overview of the LLM-enhanced RL Framework
The LLM-enhanced RL framework for the LAENet leverages the high-level cognitive capabilities of LLMs to augment multiple stages of RL.
It processes environmental states, generates actions, simulates outcomes, and shapes rewards with the support of LLMs, while the RL agent iteratively learns optimal policies through continuous interaction, as shown in Fig. 2. The classical RL loop, with the support of LLMs, thus ensures that the LAENet can handle dynamic, uncertain, and multimodal real-world scenarios with greater flexibility, collaboration, and generalization. Specifically, the four key roles of LLMs for enhancing RL in the LAENet are detailed as follows.

Fig. 2. An overview of the LLM's multiple roles in reinforcement learning, including information processor, reward designer, decision-maker, and generator, highlighting its central role in bridging language input and decision-making processes within the LAENet framework.
1) LLM as Information Processor: As information processors, LLMs decouple the burden of interpreting complex, multimodal data from the RL agent. On the one hand, LLMs can extract meaningful features from raw observations using powerful pre-trained models. Aerial vehicles with such generalization capabilities can quickly understand a changing environment (e.g., variations in communication conditions, weather, or terrain) and generate effective state representations for RL without retraining, as shown in Fig. 2. For example, if an aerial vehicle's camera captures the weather changing from sunny to foggy, the LLM processes this input and outputs a compressed feature vector (e.g., "visibility=low") as part of the RL state space. On the other hand, the LAENet with LLMs can reduce learning complexity for the RL agent by translating informal natural-language information (such as ground-control or user requests) into a formal, task-specific language [14]. When ground control updates instructions (e.g., "Emergency: Redirect to Zone B"), the LLM can interpret the urgency of the redirect and update the RL agent's objective to prioritize reaching Zone B.

2) LLM as Reward Designer: As reward designers, LLMs can leverage extensive pre-trained knowledge and reasoning capabilities to shape and refine reward functions for RL agents. Specifically, LLMs can provide reward signals by generating task-relevant functions from descriptions and observations. Furthermore, LLMs can generate executable reward-function code, which delineates the calculation process and can be iteratively refined with feedback. Taking the task-scheduling scenario in the LAENet as an example, LLMs can generate task-specific reward functions by processing textual descriptions of objectives, such as "minimize critical task delay", "prioritize low-energy UAVs", and "balance energy consumption and latency". The LLM then converts these objectives into executable code snippets that compute reward signals during RL training. Feedback from the training process and the changing environment can be used by the LLM to continuously refine the reward function, overcoming the rigidity and effort of manually crafted rewards; a minimal sketch of what such generated code might look like is given below.
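The sketch only illustrates the shape of such generated code; the observation field names and weights are assumptions made here, not values from the paper:

```python
def reward_function(obs, w_delay=0.5, w_energy=0.3, w_balance=0.2):
    """Illustrative LLM-generated reward for the task-scheduling
    example: penalise critical-task delay and energy use, and reward
    keeping low-battery UAVs lightly loaded. All fields and weights
    are hypothetical."""
    r = -w_delay * obs["critical_task_delay"]
    r -= w_energy * obs["energy_consumed"]
    # Bonus when tasks were steered away from UAVs with low batteries.
    r += w_balance * obs["low_battery_uav_idle_fraction"]
    return r
```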
3) LLM as Decision-maker: LLMs act as decision-makers in RL by guiding or generating action sequences through their pre-trained knowledge, structured reasoning, and language understanding. As action-guiders, LLMs assist in reducing the action search space by generating high-quality action candidates based on task comprehension and contextual prompts. In action generation, LLMs are fine-tuned or prompted with task goals and trajectory histories to predict high-reward expert action policies. These candidates or expert actions can be incorporated and distilled into RL agents to regularize policy learning. In scenarios where aerial vehicles need to optimize their trajectories to provide communication coverage for users, LLMs act as action-guiders by processing the current UAV state (e.g., location, nearby users, and energy levels) and generating a set of action candidates (e.g., "hover above user cluster A", "move south 10 meters", "move down 3 meters"). RL agents can then select the most rewarding candidate using a value function, which effectively avoids the large scale and noise of the full action space in regular RL algorithms; a minimal sketch of this selection step follows. Meanwhile, if prior UAV trajectory sequences resulted in incomplete communication coverage, LLMs can reason over trajectory histories, e.g., "the UAV is near user cluster A; the unserved users are mostly north; it has 30% battery; therefore, it should head north at low speed", to generate actions.
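A minimal sketch of that value-guided selection step; all interfaces are assumed stand-ins:

```python
def select_action(state, llm_candidates, q_value):
    """Pick the best of the few actions proposed by the LLM using the
    agent's learned value function, instead of searching the full
    action space. `q_value(state, action)` is the critic's estimate;
    candidates are assumed to already be mapped to the RL action
    encoding."""
    return max(llm_candidates, key=lambda action: q_value(state, action))
```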
4) LLM as Generator: LLMs can predict future state and reward sequences based on current observations and actions, enabling model-based RL agents to learn from simulated experiences. In UAV trajectory optimization tasks, states (e.g., UAV location, user distribution, and remaining energy) and actions (e.g., flight direction and speed) can be input into the LLM to generate state-action-reward sequences. The LLM can then produce large amounts of simulated trajectory data to support policy learning, thereby alleviating classical RL's reliance on interaction with the environment. In addition, by prompting LLMs with the decision paths of RL, LLMs can generate natural-language interpretations of policies, improving trust and transparency. Operators serving the LAENet can trust an RL policy decision path (e.g., the UAV moved toward access point k to serve user n) when the LLM generates an interpretable description such as "the UAV chose to move east to minimize energy cost and meet the imminent offloading deadline of user n."

B. Workflow of the LLM-enhanced RL Framework for the LAENet
We take UAV-assisted data collection for the Internet of Things (IoT) in the LAENet as an example to illustrate the workflow of the LLM-enhanced RL framework in Fig. 2; a minimal end-to-end sketch of the loop appears after the steps.
Step 1: State Perception and Abstraction. The interaction between the UAV and the environment is modeled as a Markov Decision Process (MDP) [a], where each state captures the spatial, energy, and communication conditions relevant to decision-making. Leveraging the capabilities of LLMs, the UAV abstracts natural-language instructions or sensor descriptions (e.g., a command such as "capture aerial images of a congested area" or a sensor report indicating "battery level is low at terminal n") into compact and informative state representations.
Step 2: Action Selection and Policy Execution. Based on the current state and the learned policy, the agent generates actions to optimize long-term objectives (e.g., minimizing energy consumption) by prompting and guiding the LLM to reason about adjustments to flight paths or the scheduling of data collection. This process can be governed by various reinforcement learning methods, including policy-based approaches [b], value-based methods [c], and model-based techniques [d].
Step 3: Reward Evaluation and Feedback Processing. After executing an action, the agent receives an informative and adaptive reward shaped by the LLM to quantify performance with respect to predefined objectives. For example, a user saying "I am happy with the service speed" implies that the cost in terms of delay is low. These enriched reward signals guide the agent toward more effective optimization and better responses to system constraints.
Step 4: Policy Update and Knowledge Integration. Based on accumulated experience, the agent uses the LLM to update and refine its policy by integrating external knowledge (such as statistical information on terminal data-transmission patterns or communication conditions in certain UAV hovering areas), enabling generalization across tasks and assisting in the interpretation of policy behaviors to support human-in-the-loop optimization.

Embedding LLMs into each stage of the RL loop enables agents to operate more intelligently and adaptively in complex, dynamic environments. This integration enhances learning efficiency, improves generalization to unknown scenarios, and facilitates human-aligned decision-making in the LAENet.

Notes: [a] An MDP is a mathematical framework for modeling decision-making situations where outcomes are partly random and partly under the control of a decision-maker; it consists of states, actions, transition probabilities, and rewards. [b] Policy-based methods directly learn a mapping from states to actions (i.e., a policy), such as Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic Policy Gradient (TD3). [c] Value-based methods estimate the expected return (value) of taking an action in a given state and derive the policy from these values; common examples include Q-learning and Deep Q-Networks (DQN). [d] Model-based techniques learn a model of the environment's dynamics (i.e., transition probabilities) and use this model for planning or policy learning, such as in Dyna-Q or Model Predictive Control (MPC).
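A minimal end-to-end sketch of Steps 1-4; the `env`, `agent`, and `llm` interfaces are stand-ins invented for this sketch, not APIs from the paper:

```python
def llm_enhanced_rl_episode(env, agent, llm):
    """One episode of the workflow: the LLM abstracts raw observations
    into a compact state (Step 1), the agent acts (Step 2), an
    LLM-shaped reward scores the transition (Step 3), and the agent
    updates its policy (Step 4)."""
    raw_obs = env.reset()
    done = False
    while not done:
        state = llm.abstract_state(raw_obs)                 # Step 1: perception/abstraction
        action = agent.act(state)                           # Step 2: policy execution
        raw_obs, feedback, done = env.step(action)          # interact with the LAENet environment
        reward = llm.shape_reward(state, action, feedback)  # Step 3: LLM-shaped reward
        agent.update(state, action, reward)                 # Step 4: policy update
```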
IV. CASE STUDY: LLM AS REWARD DESIGNER TO ENHANCE RL FOR ENERGY OPTIMIZATION IN THE LAENET

A. System Overview
We consider a UAV-assisted IoT scenario in the LAENet involving a UAV, a macro base station (MBS), and multiple distributed IoT terminals, as shown in Fig. 3. The UAV flies at fixed altitude and constant speed and dynamically hovers near IoT terminals. The terminals utilize energy harvested from the UAV to transmit data, which is subsequently aggregated by the UAV and relayed to the MBS. The system's total energy consumption includes the transmission energy of the terminals and the propulsion and communication energy of the UAV. The objective is to minimize total energy under constraints on power limits, data throughput, decoding reliability, and data freshness.

B. Implementation Details of the LLM as Reward Designer
We provide a detailed implementation of the LLM-as-reward-designer module for the LAENet.
1) User Prompt Design: The user needs to guide the LLM to think in the role of a reward designer in RL, ensuring that the LLM understands the task in the environment, the rules for designing rewards, and the coding requirements for reward-function generation. The user must therefore provide effective prompts. Based on chain-of-thought techniques [6] and appropriate prompt design [15], we propose a guideline that divides user prompts into two elements: role definition and task description.
Role Definition: As shown in Fig. 3, the role definition has three parts. The first specifies the functions of the LLM as a reward designer, including comprehending the system model of the task, reasoning based on observations, and using Python programming to generate the reward function accordingly. The second is a set of notes that act as fundamental normative constraints, such as not using information that was not given and focusing on the most relevant factors. The third standardizes the output format of the LLM's response, i.e., an easy-to-parse JSON format.
Task Description: The task description defines the system model and optimization objective of the specific scenario, which helps the LLM understand task-relevant contextual information, avoids overly generalized responses, and reduces the burden of prompt engineering across tasks. It also requires the LLM to infer the important reward factors for designing the reward function, which can effectively guide the RL agent to optimize its actions during learning. Additionally, the input and output formats for the factors and the reward function are standardized.

Fig. 3. UAV-assisted IoT network with an LLM-designed reward function for RL in the LAENet. The UAV agent interacts with the environment by selecting actions based on observed states. The LLM generates the reward function from a structured prompt consisting of a role definition (e.g., "You are good at understanding tasks and writing python codes. ... Note: Do not use information you are not given! ... Please think step by step and adhere to the following JSON format: {Understand, Analyze, Functions}") and a task description (e.g., "In a UAV-assisted IoT network, the UAV flies to an optimal hovering position, transfers wireless energy to IoT terminals, collects data from them, and then offloads it to a Mobile Base Station ... Objectives: Minimize total energy consumption, ... The input is a list containing these factors (each factor is a (batch_size, 1) array); the output is a (batch_size, 1) reward array."). The generated reward function, e.g., reward = (w1 * energy + w2 * position) * Penalty_terms, is evaluated against predefined constraints before being applied to policy learning, in contrast to the manually designed function reward = w * energy * Penalty_terms.
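To make the two-element prompt concrete, a minimal sketch that assembles the role definition and task description into an OpenAI-style chat request; the client plumbing is an assumption, and the strings paraphrase the excerpts in Fig. 3:

```python
ROLE_DEFINITION = (
    "You are good at understanding tasks and writing python codes. "
    "Note: Do not use information you are not given! "
    "Please think step by step and adhere to the following JSON format: "
    '{"Understand": "...", "Analyze": "...", "Functions": "..."}'
)

TASK_DESCRIPTION = (
    "In a UAV-assisted IoT network, the UAV flies to an optimal hovering "
    "position, transfers wireless energy to IoT terminals, collects data "
    "from them, and then offloads it to a Mobile Base Station. "
    "Objectives: minimize total energy consumption. "
    "The input is a list of factors (each a (batch_size, 1) array); "
    "the output is a (batch_size, 1) reward array."
)

def build_reward_design_prompt():
    """Assemble the two prompt elements into a chat message list.
    The exact API plumbing the authors used is not specified."""
    return [
        {"role": "system", "content": ROLE_DEFINITION},
        {"role": "user", "content": TASK_DESCRIPTION},
    ]
```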
2) LLM Response for Reward Design: The LLM receives the prompts from the user and generates the reward function through its code-generation ability along with logical reasoning. However, the response of the LLM is stochastic due to its probabilistic nature [12]. In addition, the LLM may hallucinate and generate code that appears reasonable but is actually non-executable [9].
LLM-designed Reward Evaluation: Inspired by recent work [11], the LLM is required to generate multiple candidate reward functions rather than relying on a single random response. Specifically, each candidate function is generated by prompting the LLM with the role definition and task description and asking it to reflect from a logical-consistency perspective, until the reward function is evaluated to satisfy the constraints. The constraints include whether the LLM response is successful, whether the output is a valid JSON structure, and whether the return type of the reward function is correct; a sketch of this check appears below.
Exploration of Reward Factors: As shown in Fig. 3, our LLM-designed reward function, reward = (w1 * energy + w2 * position) * Penalty, compared to the manually designed function, reward = w * energy * Penalty, further considers the UAV's selection of better positions to help minimize the total energy. This reward factor helps reduce propulsion energy by encouraging the UAV to stay closer to the center of the sensor distribution, reducing flight distance and travel time. Additionally, being near the center can indirectly save energy through faster data transmission and shorter collection periods, which reduce the time the UAV needs to hover and communicate.
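A sketch of the three-constraint screening described above; helper names are assumed, and executing model-written code like this requires a sandbox in practice:

```python
import json
import numpy as np

def validate_candidate(response_text, batch_size=4, n_factors=2):
    """Screen one candidate against the three constraints: the call
    returned something, the output parses as JSON, and the generated
    function returns a (batch_size, 1) array. Returns the callable if
    qualified, else None (triggering a re-prompt)."""
    if not response_text:                      # constraint 1: response succeeded
        return None
    try:
        payload = json.loads(response_text)    # constraint 2: valid JSON structure
        namespace = {}
        exec(payload["Functions"], namespace)  # defines reward_function(factors)
        fn = namespace["reward_function"]
        factors = [np.zeros((batch_size, 1)) for _ in range(n_factors)]
        out = fn(factors)
        if getattr(out, "shape", None) == (batch_size, 1):  # constraint 3: return type
            return fn
    except Exception:
        pass
    return None  # unqualified: ask the LLM for a new candidate
```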
C. Performance Evaluation
We validate the superiority of using LLMs as reward designers for RL in the LAENet. Two DRL algorithms, DDPG and TD3, are adopted to conduct the simulations. The actor and critic networks are trained with learning rates of 10^-4 and 3×10^-4, respectively. A batch size of 64, 200 training episodes, and a discount factor of 0.99 are set. The simulation environment consists of a 300 m × 300 m square area representing a marine IoT coverage zone, where 10 IoT terminals are randomly deployed. The wireless channel between the UAV and the terminals follows the Rician fading model. We use the reward design method from previous work [3] as a baseline, namely manually designed reward functions. As shown in Fig. 3, the manually designed reward function includes energy-related reward terms and penalty terms. In contrast, we employ GPT-4o as the LLM module to design the reward function, which incorporates richer reward factors based on the position of the UAV.
Fig. 4 shows the convergence performance of the DDPG and TD3 algorithms using manually designed and LLM-generated reward functions. Algorithms with LLM-designed rewards consistently outperform their manually designed counterparts in reducing energy consumption, with the TD3 algorithm achieving up to 7.2% lower final energy consumption. The reward structure designed by the LLM encourages the UAV to select more efficient trajectories and reduce flight and communication overhead. This improvement can be attributed to the LLM's ability to incorporate high-level reasoning and task-specific context when generating reward functions.
Fig. 5 shows the impact of varying packet sizes on energy consumption. As the packet size increases from 2.0 to 2.8 Mbits, overall energy consumption also rises, due to the prolonged data collection and transmission periods required for larger packets. Algorithms guided by LLM-designed reward functions consistently outperform those using manually crafted rewards, achieving up to 6.2% lower energy consumption at the 2.0 Mbits packet size, which demonstrates the effectiveness of our LLM-guided reward design in optimizing UAV decision-making and reducing system energy overhead.
V. CONCLUSION AND FUTURE WORK
In this paper, we have explored the integration of LLMs into RL to address key challenges in the LAENet. By leveraging the strengths of LLMs, we have proposed an LLM-enhanced RL framework to mitigate limitations throughout the entire pipeline of classical RL. Finally, we have presented a case study to demonstrate the effectiveness of using LLMs for designing reward functions. Promising directions from this work include the development of modular LLM-RL agents with specialized capabilities, such as planning, memory, tool use, and retrieval-augmented reasoning, to enable more adaptive and context-aware decision-making. Furthermore, in multi-agent RL scenarios, multiple collaborative LLMs can assume complementary roles, opening up new possibilities for addressing complex and dynamic tasks in heterogeneous, resource-constrained environments. Advancing these directions will be critical for realizing intelligent, efficient, and scalable aerial networking systems in the LAENet and other real-world applications.
REFERENCES
[1] Z. Li et al., "Unauthorized UAV countermeasure for low-altitude economy: Joint communications and jamming based on MIMO cellular systems," IEEE Internet Things J., vol. 12, no. 6, pp. 6659–6672, 2025.
[2] Q. Wei et al., "Multi-UAV-enabled energy-efficient data delivery for low-altitude economy: Joint coded caching, user grouping, and UAV deployment," IEEE Internet Things J., pp. 1–1, 2025.
[3] O. S. Oubbati et al., "Synchronizing UAV teams for timely data collection and energy transfer by deep reinforcement learning," IEEE Trans. Veh. Technol., vol. 71, no. 6, pp. 6682–6697, 2022.
[4] Y. Cao et al., "Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods," IEEE Trans. Neural Netw. Learn. Syst., pp. 1–21, 2024.
[5] R. Zhang et al., "Generative AI agents with large language model for satellite networks via a mixture of experts transmission," IEEE J. Sel. Areas Commun., vol. 42, no. 12, pp. 3581–3596, 2024.
[6] J. Wei et al., "Chain-of-thought prompting elicits reasoning in large language models," in Proc. NeurIPS, vol. 35, 2022, pp. 24824–24837.
[7] M. Kwon et al., "Reward design with language models," in Proc. ICLR, 2023.
[8] F. Paischer et al., "History compression via language models in reinforcement learning," in Proc. ICML, 2022, pp. 17156–17185.
[9] M. Dalal et al., "Plan-seq-learn: Language model guided RL for solving long horizon robotics tasks," in Proc. ICLR, 2024.
[10] R. Shi et al., "Unleashing the power of pre-trained language models for offline reinforcement learning," in Proc. ICLR, 2024.
[11] N. Shinn et al., "Reflexion: Language agents with verbal reinforcement learning," in Proc. NeurIPS, vol. 36, 2023, pp. 8634–8652.
[12] X. Yang et al., "RL2: Reinforce large language model to assist safe reinforcement learning for energy management of active distribution networks," arXiv preprint arXiv:2412.01303, 2024.
[13] L. Chen et al., "RLingua: Improving reinforcement learning sample efficiency in robotic manipulations with large language models," IEEE Robot. Autom. Lett., vol. 9, no. 7, pp. 6075–6082, 2024.
[14] Y. Han et al., "Large language model guided reinforcement learning based six-degree-of-freedom flight control," IEEE Access, vol. 12, pp. 89479–89492, 2024.
[15] K. Santu et al., "TELeR: A general taxonomy of LLM prompts for benchmarking complex tasks," in Findings Assoc. Comput. Linguist.: EMNLP, 2023, pp. 14197–14203.
A domain adaptation neural network for digital twin-supported fault diagnosis
Zhenling Chen, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France
Haiwei Fu, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, 91190, France
Zhiguo Zeng, Chair on Risk and Resilience of Complex Systems, Laboratoire Génie Industriel, CentraleSupélec, Université Paris-Saclay, 91190, Gif-sur-Yvette, France
Abstract—Digital twins offer a promising solution to the lack of sufficient labeled data in deep learning-based fault diagnosis by generating simulated data for model training. However, discrepancies between simulation and real-world systems can lead to a significant drop in performance when models are applied in real scenarios. To address this issue, we propose a fault diagnosis framework based on Domain-Adversarial Neural Networks (DANN), which enables knowledge transfer from simulated (source domain) to real-world (target domain) data. We evaluate the proposed framework using a publicly available robotics fault diagnosis dataset, which includes 3,600 sequences generated by a digital twin model and 90 real sequences collected from physical systems. The DANN method is compared with commonly used lightweight deep learning models such as CNN, TCN, Transformer, and LSTM. Experimental results show that incorporating domain adaptation significantly improves the diagnostic performance. For example, applying DANN to a baseline CNN model improves its accuracy from 70.00% to 80.22% on real-world test data, demonstrating the effectiveness of domain adaptation in bridging the sim-to-real gap.1
Index Terms—predictive maintenance, fault diagnosis, digital failure twin, domain adaptation neural network (DANN)
I. INTRODUCTION
Fault diagnosis aims at identifying the cause of a failure from observational sensor data [1]. One of the major challenges in fault diagnosis is that state-of-the-art deep learning-based models often require large amounts of data, which are often difficult to obtain in practice [2]. Digital twin technology combines a physical entity with its digital representation. It can accurately reproduce scenes from the physical world in a virtual environment, greatly facilitating the analysis, optimization and control of the physical system [3]. Using digital twins to generate simulated failure data and train a deep learning model for fault diagnosis has therefore become a promising approach to the data insufficiency issue in fault diagnosis.
There are already some existing works applying digital twins to fault diagnosis. For example, Jain et al. [4] proposed a digital twin-based fault diagnosis framework that utilizes the digital twin model to simulate system behavior and identify fault patterns in distributed photovoltaic systems. Wang et al. [5] proposed a digital twin-based fault diagnosis framework that integrates sensor data and physical models to detect and diagnose faults in rotating machinery within smart manufacturing systems. Yang et al. [6] proposed a digital twin-driven fault diagnosis method that combines virtual and real data to diagnose composite faults, where the digital twin generates virtual samples to compensate for the scarcity of fault samples in real systems. Most of these existing works assume that condition-monitoring data are available at the same level as the component being diagnosed.
1Code and datasets available at: https://github.com/JialingRichard/Digital-Twin-Fault-Diagnosis
In practice, however, deploying sensors at the component level is often difficult. One has to rely on system-level condition-monitoring data to infer
the component-level failure modes [7]. In one of our previous works [8], we developed a digital twin model of a robot and used it to generate simulated failure data for fault diagnosis. Testing data are collected from a real robot with different injected failures to evaluate the performance of the developed model.
The existing works share a common assumption: the digital twin model can accurately predict the actual behavior of the component under test. In practice, however, the digital twin model is not always accurate. The fault diagnosis model trained on simulation data then often suffers from poor performance when applied to real data, due to the imprecision of the simulation model. To address this issue, we propose a Domain-Adversarial Neural Network (DANN)-based framework for digital twin-supported fault diagnosis. Through the DANN [9], the developed model is able to learn useful features from the simulated data even when the simulation does not exactly match reality. We also performed a benchmark study comparing the performance of the developed model with other state-of-the-art deep learning models, including LSTM [10], Transformer [11], CNN [12] and TCN [13]. The main contributions of this paper are:
•We propose a novel DANN-based framework for digital twin-supported fault diagnosis.
•We present an open-source dataset for digital twin-supported fault diagnosis. The dataset includes simulated training data and real test data.
•We conducted a detailed benchmark study in which the performance of the developed model is compared with four other state-of-the-art deep learning models.
II. DIGITAL TWIN MODEL AND DATASET DESCRIPTION
In this paper, we consider the open-source dataset for digital twin-supported fault diagnosis that we developed previously in [8]. The dataset is created based on the digital failure twin model of a robot, as shown in Fig. 1.
Fig. 1: The fault diagnosis in digital twin for robot [8].
A digital twin model is a simulation model used to simulate the failure behavior of the robot and connect to the physical entity to reflect its real-time states. The robot comprises six motors. We monitor the trajectory of the end-effector and the control commands of each motor. The goal of the fault diagnosis is to use the condition-monitoring data to infer the failure modes of four of the six motors. Each motor might be subject to two failure modes, i.e., stuck and steady-state error. The digital failure twin is built as a two-layer model. On the motor level, we model the dynamics of the motor and its controller. The response of each motor is then fed into a forward kinematics model, which allows simulating the end-effector trajectory from the positions of the motors. The stuck and steady-state error can be simulated by changing the response of each motor, as shown in Fig. 1. To generate the training dataset, we generate 400 random trajectories and simulate the 9 classes (one normal state and eight failure states where each monitored motor could be in either of the two failure modes) under each trajectory. Each sample contains records spanning 1000 time steps. Then, we collect test data by randomly simulating 90 trajectories following the same protocol. The resulting data layout is sketched below.
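As a rough sketch of the resulting data layout (the array names and use of NumPy are illustrative assumptions; the six-feature definition is given in Section V-A):

```python
import numpy as np

# Source domain: 400 trajectories x 9 classes = 3600 simulated samples,
# each a sequence of 1000 time steps with six features per step (Sec. V-A).
X_source = np.zeros((3600, 1000, 6), dtype=np.float32)
y_source = np.repeat(np.arange(9), 400)  # class labels 0..8, 400 samples each

# Target domain: 90 sequences collected from the real robot.
X_target = np.zeros((90, 1000, 6), dtype=np.float32)
```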
In the original work [8], an LSTM was trained on the simulation dataset and applied to diagnose the failures on the real robot. The results showed that, although the trained model performed well on the validation set (separated from the training data, but still from simulation), it performed poorly on the real testing dataset (96% vs. 69%). The main reason is that the simulation model does not exactly match the behavior of the real robot. In this paper, we intend to address this issue through transfer learning.
III. EXISTING TRANSFER LEARNING MODELS
Prevalent deep learning-based models have shown great success in both academia and industry [14]. For example, Convolutional Neural Networks (CNN) are used in automated fault detection for machinery vibrations [15], and Recurrent Neural Networks (RNN) such as LSTM have proven useful in diagnosing faults from time-series data [16]. In recent work, Plakias et al. combined dense convolutional blocks with an attention mechanism to develop a new attentive dense CNN for fault diagnosis [17]. Although these methods can achieve high performance in fault diagnosis, they are usually applied under the assumption that test data and training data come from the same distribution; that is, current deep learning-based models assume independent and identically distributed (i.i.d.) data. As discussed before, the data generated from a digital twin might not exactly match the actual behavior of the physical entity. As a result, the distributions of the training and testing datasets cannot be assumed i.i.d., due to effects such as friction-induced steady-state errors and other mechanical phenomena, as well as faults occurring in real time, which can strongly affect the results. In this paper, we use transfer learning methods to align the source domain and target domain data distributions in the digital twin setting.
To solve the issue of data distribution discrepancy, various domain adaptation techniques in transfer learning have been introduced for diagnosing bearing faults [18–20]. Transfer learning can also be used to transfer knowledge from a source domain for fault diagnosis on a different target domain. Applications of transfer learning in fault diagnosis include representation adaptation [21–24], parameter transfer [25–27], and adversarial-based domain adaptation [28, 29]. One of the most often used domain adaptation methods is representation adaptation, which aligns the distributions of the representations from the source and target domains by reducing their discrepancy. Several neural networks have been built for this purpose, such as the feature-based transfer neural network (FTNN) [24] and the deep convolutional transfer learning network (DCTLN) [21]; experimental results show that DCTLN can reach an average accuracy of 86.3%. In parameter transfer, Shao et al. proposed a CNN-based machine fault diagnosis framework [27]; experimental results illustrate that it can achieve test accuracy near 100% on three mechanical datasets, and on the gearbox dataset the accuracy reaches 99.64%. In adversarial-based domain adaptation, Cheng et al. proposed Wasserstein distance-based deep transfer learning (WD-DTL) [28], which uses a CNN as the pre-trained model. Experimental results show that the transfer accuracy of WD-DTL reaches 95.75%
on average. Lu et al. developed a domain adaptation combined with deep convolutional generative adversarial network (DADCGAN)-based methodology for diagnosing DC arc faults [29]. With DADCGAN, a robust and reliable fault diagnosis scheme based on a lightweight CNN-based classifier can be achieved for the target domain.
In this paper, we choose the DANN architecture to develop a framework for digital twin-supported fault diagnosis. The main reason is that its architecture is simple and can efficiently capture features from the source domain while generalizing well to the target domain. Moreover, DANN's adversarial training mechanism enables the model to learn domain-invariant features, making it particularly effective in reducing the distribution discrepancy between the source and target domains. Furthermore, DANN performs well with limited labeled data from the target domain, addressing the common challenge of insufficient fault data in practical applications. Its ability to handle complex and nonlinear relationships in the data makes DANN a reliable and scalable solution for fault diagnosis.
IV. DANN MODEL ARCHITECTURE
We use the Domain-Adversarial Neural Network (DANN) model [9], originally developed for domain adaptation in transfer learning, and extend its application to digital twin-supported robotics fault diagnosis. The architecture of DANN is shown in Figure 2. Let the input samples be represented by $x \in X$, where $X$ is some input space, with labels (outputs) $y$ from the label space $Y$. We assume that there exist two distributions $S(x, y)$ and $T(x, y)$ on $X \otimes Y$, referred to as the source domain and the target domain. Our goal is to predict the label $y$ given the input $x$ for the target domain. We denote by $d_i$ the binary variable (domain label) for the $i$-th example, which indicates whether $x_i$ comes from the source domain ($x_i \sim S(x)$ if $d_i = 0$) or from the target distribution ($x_i \sim T(x)$ if $d_i = 1$). The input $x$ is first mapped by a feature extractor $G_f$ to a $d$-dimensional feature vector $f \in \mathbb{R}^d$; we denote the vector of parameters of all layers in this mapping as $\theta_f$, so that $f = G_f(x; \theta_f)$. Then, the feature vector $f$ is mapped by a label predictor $G_y$ to the label $y$, with parameters $\theta_y$. Finally, the same feature vector $f$ is mapped to the domain label $d$ by a domain classifier $G_d$ with parameters $\theta_d$.
For model learning, we minimize the label prediction loss on the annotated (i.e., source) part of the training set: the parameters of both the feature extractor and the label predictor are optimized to minimize the empirical loss for the source domain samples. This ensures the discriminativeness of the features $f$ and good prediction performance of the combined feature extractor and label predictor on the source domain. At the same time, we want the features $f$ to be domain-invariant; that is, we need the distributions $S(f) = \{G_f(x; \theta_f) \mid x \sim S(x)\}$ and $T(f) = \{G_f(x; \theta_f) \mid x \sim T(x)\}$ to be similar [30]. Measuring the dissimilarity of $S(f)$ and $T(f)$ is nontrivial because these distributions keep changing as learning progresses; we estimate the dissimilarity by looking at the loss of the domain classifier $G_d$, provided that the parameters $\theta_d$ of the domain classifier have been trained to discriminate between the two feature distributions.
In training, to obtain domain-invariant features we seek the parameters $\theta_f$ of the feature extractor that maximize the loss of the domain classifier (making the two feature distributions as similar as possible), while simultaneously seeking the parameters $\theta_d$ of the domain classifier that minimize the loss of the domain classifier. In addition, we seek to minimize the loss of the label predictor. The objective function is:

$$E(\theta_f, \theta_y, \theta_d) = \sum_{i=1}^{N} L_y\big(G_y(G_f(x_i; \theta_f); \theta_y), y_i\big) - \lambda \sum_{i=1}^{N} L_d\big(G_d(G_f(x_i; \theta_f); \theta_d), d_i\big) = \sum_{i=1}^{N} L_y^i(\theta_f, \theta_y) - \lambda \sum_{i=1}^{N} L_d^i(\theta_f, \theta_d) \quad (1)$$

where $L_y$ is the loss for label prediction, $L_d$ is the loss for domain classification, and $L_y^i$, $L_d^i$ denote the corresponding loss functions evaluated at the $i$-th training example. We seek the parameters $\hat{\theta}_f, \hat{\theta}_y, \hat{\theta}_d$ by solving the following optimization problem:

$$(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_d), \qquad \hat{\theta}_d = \arg\max_{\theta_d} E(\hat{\theta}_f, \hat{\theta}_y, \theta_d) \quad (2)$$

The parameters $\hat{\theta}_f, \hat{\theta}_y, \hat{\theta}_d$ are then obtained by backpropagation through the label predictor and the domain classifier. To realize the $-\lambda$ factor in (1) during backpropagation, we use a gradient reversal layer (GRL): as the backward pass goes through the GRL, the partial derivatives of the downstream loss (i.e., $L_d$) with respect to the upstream layer parameters (i.e., $\theta_f$) are multiplied by $-\lambda$ (i.e., $\frac{\partial L_d}{\partial \theta_f}$ is effectively replaced with $-\lambda \frac{\partial L_d}{\partial \theta_f}$). The GRL has the forward and backward behavior:

$$R_\lambda(x) = x \quad (3)$$

$$\frac{dR_\lambda}{dx} = -\lambda I \quad (4)$$

where $I$ is the identity matrix. For the feature extractor we use a CNN, since the CNN model achieved the best results among our baselines; it consists of two convolutional layers with kernel size 3 and 64 filters. A compact sketch of this architecture follows.
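The GRL and the two-headed DANN described above can be written compactly in PyTorch. The following is a minimal sketch, not the authors' released code; layer sizes other than the stated two convolutional layers (kernel size 3, 64 filters) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward (Eq. 3), -lambda * grad backward (Eq. 4)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient w.r.t. lam

class DANN(nn.Module):
    def __init__(self, in_channels=6, num_classes=9):
        super().__init__()
        # Feature extractor G_f: two 1-D conv layers, kernel size 3, 64 filters.
        self.feature = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.label_head = nn.Linear(64, num_classes)   # G_y
        self.domain_head = nn.Linear(64, 2)            # G_d

    def forward(self, x, lam=1.0):
        f = self.feature(x)                    # f = G_f(x; theta_f)
        y_logits = self.label_head(f)          # label prediction (loss L_y)
        d_logits = self.domain_head(GradReverse.apply(f, lam))  # domain loss L_d
        return y_logits, d_logits
```

During training, the label loss is computed on source samples only, while the domain loss uses both source and target batches with domain labels 0/1; the GRL lets a single backward pass implement the minimax problem in (2).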
V. EXPERIMENTS
A. Dataset
In this case study, we work on the dataset originally reported in [8]. As in [8], we retained the desired and realized trajectory coordinates (x, y, z) and introduced a derived feature set representing the residuals between the desired and realized trajectories. As a result, the final feature set comprises six features: the desired trajectory coordinates (x, y, z) and the corresponding residuals (x, y, z).
Fig. 2: DANN Architecture [9]
The source domain dataset generated by the digital twin consists of 3600 samples across 9 distinct labels, with each label containing 400 samples. The real-world measurements are treated as the target domain, which contains 90 samples. We split the source domain dataset into training and validation sets with a 9:1 ratio, and the target domain dataset is used as the test set. The DANN described in Sect. IV is used to train a fault diagnosis model using the source domain data. Only the measured features in the target domain, but not the labels, are used in the training process of the DANN to learn domain-invariant features. The trained DANN is then applied to predict the failure labels of the target domain.
B. Evaluation Metrics
The performance of all methods is evaluated using Accuracy and F1 Score, which are defined as follows:
a) Accuracy:

$$\text{Accuracy} = \frac{\text{Number of Correct Predictions}}{\text{Total Number of Predictions}} = \frac{TP + TN}{TP + TN + FP + FN} \quad (5)$$

where $TP$, $TN$, $FP$, and $FN$ represent the number of true positives, true negatives, false positives, and false negatives, respectively.
b) F1 Score: The F1 Score is the harmonic mean of precision and recall:

$$\text{F1 Score} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (6)$$

where:

$$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN} \quad (7)$$

These metrics provide a balanced evaluation of the model's performance.
C. Benchmarked models
We use four currently prevalent deep learning models as baselines:
•LSTM [10]: Long Short-Term Memory (LSTM) handles time-series data in deep learning and is often used to prevent vanishing and exploding gradients. LSTM is a special type of recurrent neural network (RNN) that can effectively capture and process long-term dependencies in sequence data by introducing memory units and gating mechanisms.
•Transformer [11]: The Transformer is better at modeling context dependencies and is very versatile, especially in multimodal settings. Its ability to dynamically focus on relevant parts of the input is a key reason why Transformer models excel at processing sequence data.
•CNN [12]: Convolutional Neural Networks (CNN) are mainly used as visual neural networks, extracting features layer by layer through multiple deep convolutions.
•TCN [13]: The Temporal Convolutional Network is a deep learning model specifically designed to process sequential data, combining the parallel processing capabilities of convolutional neural networks with the long-term dependency modeling capabilities of recurrent neural networks.
D. Implementation Details
The implementation of the DANN is carried out using PyTorch. The experiments are conducted on an NVIDIA RTX 3060 GPU with the following parameter settings: learning rate 0.001, batch size 32, 250 epochs, the Adam optimizer, and

$$\alpha = \frac{2}{1 + e^{-10p}} - 1 \quad (8)$$

where

$$p = \frac{\text{epoch}}{\text{max epoch}} \quad (9)$$
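A minimal sketch of the α schedule in (8)–(9); reading α as the GRL coefficient λ is how this schedule is commonly used in DANN training, though the paper does not spell this out:

```python
import math

MAX_EPOCH = 250  # with learning rate 0.001, batch size 32, and Adam, per Sec. V-D

def alpha(epoch: int, max_epoch: int = MAX_EPOCH) -> float:
    """Schedule from Eqs. (8)-(9): p = epoch / max_epoch, alpha ramps from 0 to ~1."""
    p = epoch / max_epoch
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0

# e.g., alpha(0) == 0.0 and alpha(250) is approximately 0.9999
```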
VI. RESULTS AND DISCUSSIONS
A. Average accuracy and F1 score over all methods
In this subsection, we systematically compare the results of the DANN with the four benchmarked models. We evaluate the accuracy of the models on the training set, validation set, and real test set, as shown in Table I. Additionally, we record the F1 score for each of the nine classes, as shown in Table II. Due to the randomness of deep learning models, each experiment is conducted five times, and both the average values and standard deviations of the performance metrics are reported.
From Table I, it can be seen that the four benchmarked deep learning models do not perform well, especially on the test set: their performance drops significantly compared to the training and validation sets. This can be explained by the imprecision of the simulation model used to generate the training data. The DANN, on the other hand, achieves much better performance on the test set. This is because, through domain adaptation, the DANN is able to extract domain-invariant features and generalize them to the target domain.
It is observed from Table II that most of the benchmarked models exhibit very low classification accuracy for the healthy state. This is because the healthy state is very similar to other states where one motor has steady-state errors. When the simulation model is not accurate, the generated training data make it even harder to distinguish between healthy and steady-state error states. The DANN, on the other hand, performs well in classifying the healthy state: after domain adaptation, the healthy state becomes well-separated from the other states in the extracted feature space. In summary, among the commonly used deep learning models in our experiments, the model that combines a deeper and wider CNN backbone with the DANN structure is the relatively optimal choice.
B. Ablation study for Digital Twin
To demonstrate the necessity of using a digital twin model for this task, we conduct an ablation experiment. We train the model using only the real test set, excluding the training and validation sets generated by the digital twin model. We split the real data into training and testing sets at a ratio of 7:3. Our dataset contains only 90 real data points, and most deep learning models clearly struggle to fit such a small dataset. The results, recorded in Table III, indicate that with such a limited amount of data, common methods cannot make accurate predictions. Using the digital twin model to generate simulation data, on the other hand, clearly improves performance, as the generated simulation data help the deep learning model to better learn the relevant features.
VII. CONCLUSIONS AND FUTURE WORKS
In this paper, we proposed a new deep learning baseline for fault diagnosis using an existing digital twin dataset. We applied commonly used lightweight deep learning models and demonstrated that the Domain-Adversarial Neural Network (DANN) approach with a CNN backbone, as a transfer learning method, achieves higher accuracy than other models. Furthermore, our experiments validate that combining digital twin simulation with domain adaptation techniques can effectively address the issue of limited real-world data in fault diagnosis tasks.
We selected lightweight models such as CNN, TCN, Transformer, and LSTM due to their wide adoption in time-series fault diagnosis, ease of training, and relatively low computational cost. Although these models serve as strong baselines, we acknowledge that more advanced architectures, such as pre-trained large-scale models or graph-based neural networks, may offer improved generalization and performance. Exploring these alternatives remains a promising direction for future research.
However, several limitations remain. First, the DANN framework requires more computational resources and deep learning expertise, which may pose challenges for practical deployment, particularly in resource-constrained industrial settings. Second, the inevitable discrepancies between the digital twin and the real-world system limit the performance of the model, as current simulations cannot fully capture complex physical dynamics. Third, while DANN improves generalization, the deep learning models used in this study still have room for improvement. Future work could explore more robust and generalizable models, such as those pre-trained on large-scale datasets, or more advanced domain adaptation methods.
ACKNOWLEDGMENT
The research of Zhiguo Zeng is supported by ANR-22-CE10-0004 and the chair on Risk and Resilience of Complex
Systems (Chaire EDF, Orange and SNCF). Haiwei Fu and Zhenling Chen participated in this project as a lab project in their master curriculum at CentraleSupélec. The authors would like to thank Dr. Myriam Tami for managing this project.

TABLE I: Performance Comparison of Baseline Models
Model        Training Accuracy (%)  Validation Accuracy (%)  Test Accuracy (%)
LSTM         96.06±5.57             92.22±4.60               56.00±4.59
Transformer  97.73±0.33             75.94±1.52               48.44±2.29
TCN          87.96±0.86             67.67±0.65               44.22±1.63
CNN          99.94±0.11             96.78±0.76               70.00±1.99
DANN         99.29±0.67             95.28±0.72               80.22±1.78

TABLE II: Performance Comparison on Each Category (F1 Score)
Category                    LSTM       Transformer  TCN        CNN        DANN
Healthy                     0.00±0.00  0.00±0.00    0.07±0.09  0.07±0.09  0.67±0.04
Motor 1 Stuck               0.86±0.06  0.63±0.05    0.65±0.04  0.81±0.03  0.84±0.04
Motor 1 Steady-state error  0.55±0.14  0.67±0.09    0.46±0.04  0.85±0.03  0.90±0.05
Motor 2 Stuck               0.72±0.05  0.65±0.14    0.36±0.03  0.73±0.07  0.79±0.04
Motor 2 Steady-state error  0.53±0.16  0.40±0.05    0.46±0.08  0.90±0.05  0.87±0.02
Motor 3 Stuck               0.55±0.05  0.54±0.08    0.48±0.05  0.63±0.09  0.80±0.03
Motor 3 Steady-state error  0.63±0.11  0.38±0.10    0.62±0.10  0.91±0.03  0.91±0.06
Motor 4 Stuck               0.49±0.06  0.42±0.08    0.40±0.07  0.59±0.06  0.78±0.04
Motor 4 Steady-state error  0.43±0.05  0.41±0.07    0.28±0.02  0.53±0.02  0.62±0.08

TABLE III: Performance Ablation Study
Model        Only Real Data Accuracy (%)  Digital twin-supported deep learning (%)
LSTM         14.92±4.09                   56.00±4.59
Transformer  18.10±2.58                   48.44±2.29
TCN          15.24±1.62                   44.22±1.63
CNN          13.97±2.54                   70.00±1.99
DANN         15.87±4.71                   80.22±1.78

REFERENCES
[1] Y. Zhang, J. Ji, Z. Ren, Q. Ni, F. Gu, K. Feng, K. Yu, J. Ge, Z. Lei, and Z. Liu, "Digital twin-driven partial domain adaptation network for intelligent fault diagnosis of rolling bearing," Reliability Engineering & System Safety, vol. 234, p. 109186, 2023.
[2] D. Zhong, Z. Xia, Y. Zhu, and J. Duan, "Overview of predictive maintenance based on digital twin technology," Heliyon, vol. 9, no. 4, 2023.
[3] M. G. Juarez, V. J. Botti, and A. S. Giret, "Digital twins: Review and challenges," Journal of Computing and Information Science in Engineering, vol. 21, no. 3, p. 030802, 2021.
[4] P. Jain, J. Poon, J. P. Singh, C. Spanos, S. R. Sanders, and S. K. Panda, "A digital twin approach for fault diagnosis in distributed photovoltaic systems," IEEE Transactions on Power Electronics, vol. 35, no. 1, pp. 940–956, 2019.
[5] J. Wang, L. Ye, R. X. Gao, C. Li, and L. Zhang, "Digital twin for rotating machinery fault diagnosis in smart manufacturing," International Journal of Production Research, vol. 57, no. 12, pp. 3920–3934, 2019.
[6] C. Yang, B. Cai, Q. Wu, C. Wang, W. Ge, Z. Hu, W. Zhu, L. Zhang, and L. Wang, "Digital twin-driven fault diagnosis method for composite faults by combining virtual and real data," Journal of Industrial Information Integration, vol. 33, p. 100469, 2023.
[7] Y. Ran, X. Zhou, P. Lin, Y. Wen, and R. Deng, "A survey of predictive maintenance: Systems, purposes and approaches," arXiv preprint arXiv:1912.07383, pp. 1–36, 2019.
[8] K. M. Court, X. M. Court, S. Du, and Z. Zeng, "Use digital twins to support fault diagnosis from system-level condition-monitoring data," arXiv preprint arXiv:2411.01360, 2024.
[9] Y. Ganin and V. Lempitsky, "Unsupervised domain adaptation by backpropagation," in International Conference on Machine Learning, pp. 1180–1189, PMLR, 2015.
[10] S. Hochreiter,
"Long short-term memory," Neural Computation, MIT Press, 1997.
[11] A. Vaswani, "Attention is all you need," Advances in Neural Information Processing Systems, 2017.
[12] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel, "Handwritten digit recognition with a back-propagation network," Advances in Neural Information Processing Systems, vol. 2, 1989.
[13] S. Bai, J. Z. Kolter, and V. Koltun, "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling," arXiv preprint arXiv:1803.01271, 2018.
[14] M. He and D. He, "Deep learning based approach for bearing fault diagnosis," IEEE Transactions on Industry Applications, vol. 53, no. 3, pp. 3057–3065, 2017.
[15] M. Xia, T. Li, L. Xu, L. Liu, and C. W. De Silva, "Fault diagnosis for rotating machinery using multiple sensors and convolutional neural networks," IEEE/ASME Transactions on Mechatronics, vol. 23, no. 1, pp. 101–110, 2017.
[16] J. Shi, D. Peng, Z. Peng, Z. Zhang, K. Goebel, and D. Wu, "Planetary gearbox fault diagnosis using bidirectional-convolutional LSTM networks," Mechanical Systems and Signal Processing, vol. 162, p. 107996, 2022.
[17] S. Plakias and Y. S. Boutalis, "Fault detection and identification of rolling element bearings with attentive dense CNN," Neurocomputing, vol. 405, pp. 208–217, 2020.
[18] W. Li, R. Huang, J. Li, Y. Liao, Z. Chen, G. He, R. Yan, and K. Gryllias, "A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges," Mechanical Systems and Signal Processing, vol. 167, p. 108487, 2022.
[19] H. Zhiyi, S. Haidong, J. Lin, C. Junsheng, and Y. Yu, "Transfer fault diagnosis of bearing installed in different machines using enhanced deep auto-encoder," Measurement, vol. 152, p. 107393, 2020.
[20] H. Cao, H. Shao, X. Zhong, Q. Deng, X. Yang, and J. Xuan, "Unsupervised domain-share CNN for machine fault transfer diagnosis from steady speeds to time-varying speeds," Journal of Manufacturing Systems, vol. 62, pp. 186–198, 2022.
[21] L. Guo, Y. Lei, S. Xing, T. Yan, and N. Li, "Deep convolutional transfer learning network: A new method for intelligent fault diagnosis of machines with unlabeled data," IEEE Transactions on Industrial Electronics, vol. 66, no. 9, pp. 7316–7325, 2018.
[22] S. Pang and X. Yang, "A cross-domain stacked denoising autoencoders for rotating machinery fault diagnosis under different working conditions," IEEE Access, vol. 7, pp. 77277–77292, 2019.
[23] D. Xiao, Y. Huang, L. Zhao, C. Qin, H. Shi, and C. Liu, "Domain adaptive motor fault diagnosis using deep transfer learning," IEEE Access, vol. 7, pp. 80937–80949, 2019.
[24] B. Yang, Y. Lei, F. Jia, and S. Xing, "An intelligent fault diagnosis approach based on transfer learning from laboratory bearings to locomotive bearings," Mechanical Systems and Signal Processing, vol. 122, pp. 692–706, 2019.
[25] Z. He, H. Shao, X. Zhang, J. Cheng, and Y. Yang, "Improved deep transfer auto-encoder for fault diagnosis of gearbox under variable working conditions with small training samples," IEEE Access, vol. 7, pp. 115368–115377, 2019.
[26] H. Kim and B.
D. Youn, "A new parameter repurposing method for parameter transfer with small dataset and its application in fault diagnosis of rolling element bearings," IEEE Access, vol. 7, pp. 46917–46930, 2019.
[27] S. Shao, S. McAleer, R. Yan, and P. Baldi, "Highly accurate machine fault diagnosis using deep transfer learning," IEEE Transactions on Industrial Informatics, vol. 15, no. 4, pp. 2446–2455, 2018.
[28] C. Cheng, B. Zhou, G. Ma, D. Wu, and Y. Yuan, "Wasserstein distance based deep adversarial transfer learning for intelligent fault diagnosis with unlabeled or insufficient labeled data," Neurocomputing, vol. 409, pp. 35–45, 2020.
[29] S. Lu, T. Sirojan, B. T. Phung, D. Zhang, and E. Ambikairajah, "DA-DCGAN: An effective methodology for DC series arc fault diagnosis in photovoltaic systems," IEEE Access, vol. 7, pp. 45831–45840, 2019.
[30] H. Shimodaira, "Improving predictive inference under covariate shift by weighting the log-likelihood function," Journal of Statistical Planning and Inference, vol. 90, no. 2, pp. 227–244, 2000.
Agent-Environment Alignment via Automated Interface Generation
Kaiming Liu1, Xuanyu Lei1,2, Ziyue Wang1, Peng Li2∗, Yang Liu1,2∗
1Department of Computer Science and Technology, Tsinghua University, Beijing, China
2Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China
Abstract
Large language model (LLM) agents have shown impressive reasoning capabilities in interactive decision-making tasks. These agents interact with the environment through intermediate interfaces, such as predefined action spaces and interaction rules, which mediate perception and action. However, mismatches often occur between the internal expectations of the agent regarding the influence of its issued actions and the actual state transitions in the environment, a phenomenon referred to as agent-environment misalignment. While prior work has invested substantially in improving agent strategies and environment design, the critical role of the interface still remains underexplored. In this work, we empirically demonstrate that agent-environment misalignment poses a significant bottleneck to agent performance. To mitigate this issue, we propose ALIGN, an Auto-Aligned Interface Generation framework that alleviates the misalignment by enriching the interface. Specifically, the ALIGN-generated interface enhances both the static information of the environment and the step-wise observations returned to the agent. Implemented as a lightweight wrapper, this interface achieves the alignment without modifying either the agent logic or the environment code. Experiments across multiple domains including embodied tasks, web navigation and tool-use show consistent performance improvements, with up to a 45.67% success rate improvement observed in ALFWorld. Meanwhile, the ALIGN-generated interface can generalize across different agent architectures and LLM backbones without interface regeneration. Code and experimental results are available at https://github.com/THUNLP-MT/ALIGN.
Figure 1: Illustration of agent-environment misalignment and our proposed solution. On the left, the agent and the environment have a misalignment in their interpretation of the same observation, where the agent's understanding of the observation differs from the environment's underlying logic. On the right, our method, ALIGN, automatically generates interfaces that provide the agent with clearer interaction context, aligning the agent's understanding with the environment's logic.
∗Correspondence to Peng Li <[email protected]>, Yang Liu <[email protected]>
1 Introduction
Large Language Model (LLM) agents have demonstrated promising performance in interactive tasks such as embodied tasks [12, 25, 44], web navigation [7, 19, 35], and tool-use tasks [34, 38,
47]. In these tasks, agents typically interact with the environment through manually designed interfaces such as predefined action spaces and interaction rules. While substantial efforts have been devoted to improving agents and environments, comparatively little attention has been paid to the interface between them. This has led to a problem we term agent-environment misalignment, which significantly impacts agent performance.
Agent-environment misalignment refers to the discrepancy between the agent's interpretation of the observation following an action and the underlying logic of the environment. As illustrated in Figure 1 (left), in ALFWorld [40], issuing examine receptacle fails unless the agent first executes go to receptacle. Consequently, the environment responds with the observation "Nothing happens.". At this point, the agent interprets the observation to mean that there is nothing on shelf 1, which is inconsistent with the underlying reason for the environment providing it. To assess the impact of this misalignment, we conduct preliminary experiments, which reveal that simply revising the observation for an invalid examine receptacle action to "You need to first go to receptacle before you can examine it" increases the success rate of a vanilla Qwen2.5-7B-Instruct [42] agent on ALFWorld from 13.4% to 31.3%1. This suggests that agent-environment misalignment significantly hinders task success, and can be alleviated by improving interface design. From the perspective of the agent, poorly designed interfaces impose unnecessary cognitive overhead. Furthermore, from an evaluation perspective, inadequate interfaces can obscure an accurate assessment of the true reasoning capabilities of agents. Therefore, we argue that the problem of agent-environment misalignment warrants greater attention.
However, addressing the agent-environment misalignment is challenging. On one hand, current benchmarks primarily focus on advancing agent intelligence by constructing increasingly complex and challenging environments [21, 48, 50, 51, 62], often overlooking the importance of improving interface design. This oversight extends across multiple domains of interactive tasks. For instance, ALFWorld, OSWorld [51], and M3ToolEval [47] all exhibit similar deficiencies: failing to provide agent-parseable observations for environmental constraint violations in embodied tasks, positional inaccuracies in operating system tasks, and parameter format errors in multi-turn tool-use tasks, respectively. On the other hand, although some recent work [1, 54, 60] has begun to consider interface design, these efforts often rely on manual, environment-specific tailoring, which introduces two critical issues: (1) it is highly labor-intensive, and (2) whether human-designed interfaces are optimal for agents remains an open question.
Furthermore, in addition to studies that explicitly optimize interface design, it is common in agent-focused research for researchers to manually re-engineer environment interfaces to align with their specific methods. For instance, for the same environment ALFWorld, Zhou et al. [63] manually maintain the environment's state information in JSON format; Ma et al. [28] introduce a new action check_valid_actions to enable agents to retrieve all valid actions; and Chen et al. [9] re-implement the environment by wrapping it into a new class InteractEnv. However, such ad-hoc
customization poses a significant challenge to the field: it compromises the direct comparability across different approaches. Moreover, these modifications are often tailored to the specific methods proposed, making it difficult for the research community to determine whether performance variations stem from novel agent architectures or from the non-standardized, customized interfaces. Therefore, we believe that manually re-engineering environment interfaces is not an optimal approach to alleviating the agent-environment misalignment problem.
Distinct from the aforementioned works, we propose to automatically generate interfaces for bridging the agent-environment misalignment. In this work, we introduce ALIGN (Auto-Aligned Interface Generation), a framework that automatically generates aligned interfaces for environments. The generated interface consists of two modules: INFERRULES and WRAPSTEP. The former automatically discovers and provides the agent with static information about environmental rules or internal constraints, facilitating static alignment, while the latter enhances the interaction by offering more detailed observations for agent-issued actions, enabling dynamic alignment, as shown in Figure 1 (right). Owing to the powerful reasoning and coding capabilities of current advanced LLMs, we utilize these models to analyze existing agent-environment misalignments and automatically generate the interface. Additionally, we employ LLMs to conduct experimental verification procedures to mitigate hallucination issues [3, 52]. Specifically, our LLM-based system autonomously validates both proposed misalignments and generated interfaces through direct interaction with the environment, ensuring that identified issues genuinely exist and are properly addressed by the interface. The generated interface acts as a lightweight wrapper, providing richer context and explicit constraint hints, enabling different LLM agents to align with the environment directly.
1Experimental details are provided in Appendix B.
To evaluate ALIGN, we conduct experiments on four representative benchmarks across three domains: embodied tasks, web navigation, and tool-use tasks. Our results demonstrate consistent performance improvements across all four benchmarks when using the ALIGN-generated interface, with notable gains of 45.67% in average success rate on ALFWorld. Moreover, the ALIGN-generated interface reduced the prevalence of consecutive invalid actions by 65% on ALFWorld, highlighting the efficiency of our approach in mitigating agent-environment misalignment.
Our key contributions can be summarized as follows:
• We identify and characterize the agent-environment misalignment problem, empirically demonstrating its prevalence across diverse domains and its role as a significant bottleneck to agent performance.
• We introduce ALIGN, the first framework that automatically generates aligned interfaces to alleviate agent-environment misalignment, without modifying agent logic or environment code.
• We demonstrate the effectiveness and generalizability of ALIGN across three domains, with up to a 45.67% success rate improvement on ALFWorld.
2 Related work
Agent-environment interface The agent-environment interface defines how agents interact with the environment. In reinforcement learning, researchers construct unified interaction interfaces [4, 5, 22, 43] to standardize the application and evaluation of different learning algorithms.
With the increasing capability of LLMs to perform human-like actions [17, 26, 28], interface design has been proven to largely
influence the performance of LLM-based agents [51, 37]. SWE-agent [54] proposes agent-computer interfaces (ACI) for coding agents, emphasizing interface optimization. Following this research line, recent efforts aim to improve generalization [1, 36, 32] and enhance interfaces with auxiliary tools [6, 16, 24, 27, 53]. Nevertheless, current agent-environment interfaces are mostly manually crafted and tailored for specific environments or agent frameworks, limiting their generalization and scalability. Therefore, we propose automated interface generation to empower agents with effective, generalizable and automatic interface alignment.
Methods aligning agents with environments LLM agents have exhibited strong potential for real-world interaction and task completion [58, 39, 26]. Current research in this area can be broadly categorized into training-based and training-free methods. Training-based methods consist of fine-tuning LLMs with expert-level interaction trajectories [59, 8, 11, 14, 10] and enhancing environment-aligned planning and acting via reinforcement learning [2, 56, 35, 13, 64, 49]. Though effective, these methods suffer from high computational costs and limited generalization to unseen environments. Another approach constructs training-free multi-agent frameworks for task decomposition and experience accumulation [9, 20, 41, 55, 63], offering a lightweight solution to align agents with environments. However, static agent pipelines lack flexibility and generalization, and experience injected through prompting often fails to capture environment dynamics and is not effectively utilized by LLMs, resulting in insufficient alignment between agents and environments.
3 Method
3.1 Problem formulation
In the context of interactive decision-making tasks, we define the environment $E$ as a tuple $(S, A, T, F, I)$, where:
• $S$ denotes the set of all possible states of the environment;
• $A$ denotes the action space, the set of actions the agent can invoke;
• $T: S \times A \to S$ represents the state transition function, which defines how the environment state evolves in response to agent actions;
• $F: S \times A \to O$ is the observation function, providing textual feedback that reflects the consequences of the action in the current state, where $O$ is the set of all possible observations;
• $I$ encodes the environment foundational information description, a fixed, declarative representation of the environment's basic introduction, object attributes, or domain rules, which is exposed to the agent at initialization.
An agent $\pi$ operates as a policy that, at each timestep $t$, receives $(I, task, o_{t-1})$, where $task$ is the task description and $o_{t-1} = F(s_{t-1}, a_{t-1})$ is the observation from the previous step, and produces an action $a_t \in A$. In general, $o_0$ is the initial observation. The task culminates in an interaction trajectory $\tau = [(s_0, a_0, o_0), \ldots, (s_t, a_t, o_t)]$, and the environment provides feedback on task completion that indicates how well the agent has achieved its goal by the end of the interaction.
In practice, misalignment can arise between the internal expectations of the agent and the actual transitions in the environment. After producing an action $a_t$, the agent may anticipate a transition to a state $s_{t+1}^{expected}$ consistent with its reasoning. However, due to implicit or under-specified constraints in $E$, the actual next state $s_{t+1}^{actual} = T(s_t, a_t)$ may differ from $s_{t+1}^{expected}$. This mismatch, which we refer to as agent-environment misalignment, can disrupt the agent's intended progress toward the goal, even if the action $a_t$ is logically coherent under the agent's interpretation of $I$ and the prior observation.
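As a reading aid, this formulation can be expressed as a minimal Python protocol; the class and method names below are illustrative assumptions rather than an API defined by the paper.

```python
from typing import Protocol

class Environment(Protocol):
    """The tuple (S, A, T, F, I): step() applies T and returns the observation F."""
    def info(self) -> str: ...                 # I: static description shown at init
    def step(self, action: str) -> str: ...    # applies T(s, a), returns o = F(s, a)

def run_episode(env: Environment, agent, task: str, max_steps: int = 50) -> list:
    """Roll out the interaction trajectory tau = [(a_0, o_0), (a_1, o_1), ...]."""
    trajectory = []
    obs = env.info()  # stand-in for the initial observation o_0 in this sketch
    for _ in range(max_steps):
        action = agent(env.info(), task, obs)  # pi(I, task, o_{t-1}) -> a_t
        obs = env.step(action)                 # o_t = F(s_t, a_t)
        trajectory.append((action, obs))
    return trajectory
```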
3.2 ALIGN overview
To alleviate the agent-environment misalignment, we introduce ALIGN, a framework that automatically generates an aligned interface between the agent and the environment. Concretely, we redefine the interface by wrapping two key environment signals: (1) the static environment description $I$, which we transform into augmented information $\tilde{I}$ that explicitly communicates relevant interaction rules and constraints to the agent before task execution; and (2) the step-wise observation $o_t = F(s_t, a_t)$, which we restructure as an augmented observation $\tilde{o}_t$ that captures both the original observation and additional signals about the success, failure conditions, or inferred preconditions of the action.
Figure 2: Overview of the ALIGN-generated interface.
These enriched signals $(\tilde{I}, \tilde{o}_t)$ are generated without modifying the environment code, and are instead constructed by an interface wrapper layered on top of the environment, as illustrated in Figure 2. This wrapper contains two key modules:
INFERRULES(·): Static information of domain-specific execution rules based on the task description and the initial observation $o_0$. Formally, it implements a mapping:

$$\text{INFERRULES}: (task, o_0) \to \tilde{I}$$

where $\tilde{I}$ includes the automatically extracted constraints, such as precondition dependencies or action ordering requirements.
WRAPSTEP(·): A dynamic observation processor that intercepts each agent-issued action and augments the raw observation if needed. It implements the mapping:

$$\text{WRAPSTEP}: (F, s_t, a_t) \to \tilde{o}_t$$

where $\tilde{o}_t$ encapsulates both $F(s_t, a_t)$ and additional diagnostic or corrective information inferred from the execution context.
Together, these modules form an intermediate interface wrapper layer that intercepts and transforms environment information before it reaches the agent. This design allows the base agent $\pi$ to remain unchanged, while still benefiting from contextual clarity and enriched observations that help avoid misaligned actions. From the perspective of the agent, interaction now occurs with an augmented environment, which we denote as:

$$\tilde{E} = (S, A, T, \tilde{F}, I \cup \tilde{I})$$

Here, the observation function $\tilde{F}$ is defined as $\tilde{F}(s_t, a_t) := \text{WRAPSTEP}(F, s_t, a_t)$. This formulation does not alter the internal structure or transition dynamics of the original environment $E$. Instead, it constructs an externally wrapped interaction interface that provides the agent with a richer and more interpretable view of its operating context. For convenience in subsequent representations, we define the interface as $\Phi := \{\text{INFERRULES}, \text{WRAPSTEP}\}$.
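Echoing the interface example shown at the bottom of Figure 3, the following is a minimal sketch of such a wrapper for the ALFWorld `examine` precondition; the state-tracking details are illustrative assumptions, not the generated code itself.

```python
# InferRules: static alignment -- environment rules surfaced to the agent.
def InferRules() -> str:
    return ("1. Before examining or interacting with any receptacle, "
            "you must first go to that receptacle.")

# WrapStep: dynamic alignment -- augments the raw observation o_t when the
# environment's "Nothing happens." hides a violated precondition.
def WrapStep(env, state, action: str) -> str:
    obs = env.step(action)                      # original o_t = F(s_t, a_t)
    if action.startswith("examine"):
        target = action.removeprefix("examine").strip()
        # `state.location` is an assumed tracker of where the agent currently is.
        if obs == "Nothing happens." and state.location != target:
            obs = (f"You need to go to {target} before examining it. You must "
                   f"first navigate to a receptacle before you can examine it.")
    return obs
```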
such as "You need to go to drawer 1 before examining it" when the Agent attempts to examine a receptacle without first moving to it. Experiment Verification Example Optimizer: <thought>...</thought> <action>init_simulator(task_id="4-293")</action> Experiment: ... Optimizer: <thought>Now I will simulate an invalid "examine" action where ...</thought> <action>exec_agent_action("examine drawer 1")</action> Experiment: ... Observation: Nothing happens. ... ... Optimizer: <thought>The "examine drawer 1" action was executed, but the environment did not provide sufficient feedback ...</thought> <if_need_refine>True</if_need_refine> <refine_strategy>...</refine_strategy> Interface Example # Information Align def InferRules(): return """1. Before examining or interacting with any receptacle, you must first go to that receptacle.""" # Interaction Align def WrapStep(): ... if target not in current_location: obs = f"You need to go to {target} before examining it. You must first navigate to a receptacle before you can examine it." Figure 3: ALIGN framework. In each iteration, ALIGN progresses though three stages. Stage 1 : the Analyzer identifies potential agent-environment misal ignments and validates them through ex- periments; Stage 2 : the Optimizer generates a new interface based on the previo us interface and identified misalignments, followed by verification and refin ement; Stage 3 : the agent interacts with the updated interface-wrapped environment, with trajecto ries of failed tasks fed back to the Analyzer for analysis in the next iteration. At the bottom of the figure , examples for misalignment, verifica- tion of interface integrity by Optimizer through experimen ts, and the ALIGN-generated interface are provided. As shown in Figure 3, the ALIGN integrates two cooperative mo dules, Analyzer andOptimizer to generate aligned interfaces. The framework operates thr ough iterative optimization, with each iteration comprising three stages: in Stage 1, the Analyzer identifies agent-environment misalign- ments by analyzing past interaction trajectories; in Stage 2, the Optimizer generates, validates and refines a new interface based on the detected misalignments; and in Stage 3, the agent interacts with the environment wrapped with the newly generated interface , and the failed task trajectories are fed back to Analyzer for analysis in the next iteration. 3.3 ALIGN framework Algorithm 1 ALIGN: Auto-Aligned Interface Generation Require: EnvironmentE, Agentπ, Task training setTtrain, Maximum iterations K 1: Initialize misalignment set M ← ∅ , interface Φ(0)← { INFER RULES(0),WRAPSTEP(0)}, where INFER RULES(0)and W RAPSTEP(0)are identity functions 2:fori= 1,2,...,K do 3:˜E(i−1)←EnvironmentEwrapped with interface Φ(i−1) 4:τ(i−1) fail←Failed trajectories from agent πinteracting with ˜E(i−1)onTtrain 5: ifτ(i−1) fail=∅then 6: break ⊲No more failures in the training set 7: end if // Stage 1: Misalignment Analysis 8:M(i)←Analyzer(τ(i−1) fail,M,Φ(i−1)) 9: ifM(i)=∅then 10: break ⊲No new misalignments identified 11: end if 12:M←M∪M(i) // Stage 2: Interface Generation 13:Φ(i)←Optimizer (M(i),Φ(i−1)) 14:end for 15:return final interface Φ(i) To automate the generation of interfaces that bridge the age nt-environment misalignments, ALIGN need to solve two key challenges: (1) how to analyze and ident ify existing agent-environment mis- 5 alignments, and (2) how to generate an interface that addres ses these misalignments. The overall algorithm process of ALIGN is outlined in Algorithm 1. 
Misalignment Analysis We represent each agent-environment misalignment using structured text, as shown in the bottom left of Figure 3. The "Agent High-Level Reasoning Intent" and "Environment