RT5yHR0zwp
PEPNet: A Lightweight Point-based Event Camera 6-DOFs Pose Relocalization Network
Anonymous authors
Paper under double-blind review
Abstract
Event cameras exhibit remarkable attributes such as high dynamic range, asynchronicity, and low latency, making them highly suitable for vision tasks that involve high-speed motion in challenging lighting conditions. These cameras implicitly capture movement and depth information in events, making them appealing sensors for Camera Pose Relocalization (CPR) tasks. Nevertheless, existing CPR networks based on events neglect the pivotal fine-grained temporal information in events, resulting in unsatisfactory performance. Moreover, their energy-efficiency advantage is further compromised by the use of excessively complex models, hindering efficient deployment on edge devices. In this paper, we introduce PEPNet, a lightweight point-based network designed to regress six degrees of freedom (6-DOFs) event camera poses. We rethink the relationship between the event camera and CPR tasks, leveraging the raw point cloud directly as network input to harness the high-temporal resolution and inherent sparsity of events. PEPNet is adept at abstracting spatial and implicit temporal features through a hierarchical structure and explicit temporal features through an Attentive Bi-directional Long Short-Term Memory (A-Bi-LSTM). By employing a carefully crafted lightweight design, PEPNet delivers state-of-the-art (SOTA) performance on public datasets with meager computational resources. Specifically, PEPNet attains a significant 38% performance improvement on the random split DAVIS 240C CPR Dataset, utilizing merely 6% of the parameters compared to traditional frame-based approaches. Moreover, the lightweight variant PEPNettiny achieves results comparable to the SOTA while employing a mere 0.5% of the parameters.
1 Introduction
Event cameras are a type of bio-inspired vision sensor that responds to local changes in illumination that exceed a predefined threshold [Lichtsteiner et al., 2008]. Differing from conventional frame-based cameras, event cameras independently and asynchronously emit pixel-level events. Notably, event cameras boast an exceptional triad: high dynamic range, low latency, and ultra-high temporal resolution. This unique combination empowers superior performance under challenging light conditions, adeptly capturing the swift scene and rapid motion changes in near-microsecond precision [Posch et al., 2010]. Additionally, event cameras boast remarkably low power consumption. Due to their inherent hardware attributes, event cameras have garnered significant attention in the fields of computer vision and robotics in recent years, positioning them as a popular choice for many power-constrained devices like wearable devices, mobile drones, and robots [Delbruck & Lang, 2013; Gallego et al., 2020; Mitrokhin et al., 2019]. Camera Pose Relocalization (CPR) is such an example. CPR facilitates the accurate estimation of a camera’s pose within the world coordinate system [Sünderhauf et al., 2015]. It is extensively employed in numerous applications, including Virtual Reality (VR), Augmented Reality (AR), and robotics [Shavit & Ferens, 2019].
CPR tasks using event cameras significantly diverge from their conventional CPR counterparts that employ frame-based cameras, primarily due to the inherent dissimilarity in data output mechanisms between these two camera types. Furthermore, events inherently encompass information regarding object motion and depth changes across precise temporal and spatial dimensions, attributes of paramount significance within the domain of CPR tasks [Rebecq et al., 2018; Gallego...
Regrettably, existing event-based CPR networks often derive from the conventional camera network paradigms and inadequately address the unique attributes of event data. More specifically, events are transformed into various representations such as event images (Nguyen et al., 2019), time surfaces (Lin et al., 2022), and other representations (Lin et al., 2022), leading to the loss of their fine-grained temporal information. Furthermore, most event-based methods tend to overlook the computational load of the network, only prioritizing elevated accuracy, which contradicts the fundamental design principles of event cameras (Gallego et al., 2020).
Point Cloud is a collection of 3D points \((x, y, z)\) that represents the shape and surface of an object or environment and is often used in lidar and depth cameras (Guo et al., 2020). Event Cloud is a collection of events \((x, y, t, p)\) generated by event cameras, \(t\) represents timestamps and \(p\) is the polarity. By treating each event’s temporal information as the third dimension, event inputs \((x, y, t)\) can be transformed into points and aggregated into a pseudo-Point Cloud (Wang et al., 2019; Qi et al., 2017a,b). However, a direct transplantation of the Point Cloud network has not yet exhibited an amazing performance advantage in processing event data. Given that the \(t\) dimension of Event Cloud is not strictly equivalent to the spatial dimensions \((x, y, z)\), customizing the Point Cloud network becomes imperative to adequately capture the temporal information of events.
In this study, we introduce PEPNet, an innovative end-to-end CPR network designed to harness the attributes of event cameras. A comparison of our method to other event-based methods is illustrated in Figure 2 in red and blue, respectively. Our main contributions are as follows: First, PEPNet directly processes the raw data obtained from the event cameras, meticulously preserving the fine-grained temporal information and the order inherent in the data. Second, PEPNet proficiently captures spatial features and implicit temporal patterns through its hierarchical structure with temporal aggregation. Additionally, it effectively incorporates explicit temporal features using A-Bi-LSTM. This architecture is tailored to accommodate the high temporal resolution and sparse characteristics inherent in event cameras. Third, PEPNet not only attains SOTA results on a public dataset (Mueggler et al., 2017) but also can be executed in real time with a lightweight design, as shown in Figure 1. Diverging from other point-based approaches in event data processing (Wang et al., 2019; Ren et al., 2023), PEPNet stands out by meticulously considering the distinction between Event Cloud and Point Cloud in its design. This thoughtful approach enables the precise extraction of spatio-temporal features and facilitates solutions for a spectrum of event-based tasks.
### 2 RELATED WORK
#### 2.1 FRAME-BASED CPR LEARNING METHODS
Deep learning, crucial for vision tasks like classification and object detection (LeCun et al., 2015), has seen advancements such as PoseNet’s innovative transfer learning (Kendall et al., 2015). Utilizing VGG, ResNet (Simonyan & Zisserman, 2014; He et al., 2016), LSTM, and customized loss functions (Walch et al., 2017; Wu et al., 2017; Naseer & Burgard, 2017), researchers enhanced this approach. Auxiliary Learning methods further improved performance (Valada et al., 2018; Radwan et al., 2018; Lin et al., 2019), although overfitting remains a challenge. Hybrid pose-based methods, combining learning with traditional pipelines (Laskar et al., 2017; Balntas et al., 2018), offer promise. DSAC series, for instance, achieve high pose estimation accuracy (Brachmann & Rother, 2021; Brachmann et al., 2017), but come with increased computational costs and latency, especially for edge devices.
#### 2.2 EVENT-BASED CPR LEARNING METHODS
Event-based CPR methods often derive from the frame-based CPR network. SP-LSTM (Nguyen et al., 2019) employed the stacked spatial LSTM networks to process event images, facilitating a real-time pose estimator. To address the inherent noise in event images, Jin et al. (2021) proposed a network structure combining denoise networks, convolutional neural networks, and LSTM, achieving good performance under complex working conditions. In contrast to the aforementioned methods, a novel representation named Reversed Window Entropy Image (RWEI) (Lin et al., 2022) is introduced, which is based on the widely used event surface (Mitrokhin et al., 2020) and serves as the input to an attention-based DSAC* pipeline (Brachmann & Rother, 2021) to achieve SOTA results. However, the computationally demanding architecture involving representation transformation and hybrid pipeline poses challenges for real-time execution. Additionally, all existing methods ignore the fine-grained temporal feature of the event cameras, and accumulate events into frames for processing, resulting in unsatisfactory performance.
### 2.3 Point Cloud Network
Point-based methodologies have transformed the direct processing of Point Cloud, with PointNet (Qi et al., 2017a) as a standout example. Taking a step beyond, PointNet++ (Qi et al., 2017b) introduced a Set Abstraction module. While it initially employed a straightforward MLP in the feature extractor, recent advancements have seen the development of more sophisticated feature extractors to enhance Point Cloud processing (Wu et al., 2019; Zhao et al., 2021; Ma et al., 2021; Dosovitskiy et al., 2020). When extending these techniques to Event Cloud, Wang et al. (Wang et al., 2019) were the first to address the temporal information processing challenge while maintaining representation in both the x and y axes, enabling gesture recognition using PointNet++. Further enhancements came with PAT (Yang et al., 2019), which incorporated self-attention and Gumbel subset sampling, leading to improved performance in recognition tasks. However, existing point-based models still fall short in performance compared to frame-based methods. This phenomenon can be attributed to the distinctively different characteristics of Point Cloud and Event Cloud. Event Cloud contradicts the permutation and transformation invariance present in Point Cloud due to its temporal nature. Additionally, the Point Cloud network is not equipped to extract explicit temporal features.
### 3 PEPNet
The PEPNet pipeline consists of four essential modules: (1) a preprocessing module for the original Event Cloud, (2) a hierarchical point cloud feature extraction structure, (3) an Attentive Bi-directional LSTM, and (4) a 6-DOFs pose regressor, as illustrated in Figure 3. In the following sections, we provide detailed descriptions and formulations for each module.
#### 3.1 Event Cloud
To preserve the fine-grained temporal information and original data distribution attributes from the Event Cloud, the 2D-spatial and 1D-temporal event information is constructed into a three-dimensional representation to be processed in Point Cloud. Event Cloud consists of time-series data capturing spatial intensity changes of images in chronological order, and an individual event is denoted as $e_k = (x_k, y_k, t_k, p_k)$, where $k$ is the index representing the $k^{th}$ element in the sequence. Consequently, the set of events within a single sequence ($\mathcal{E}$) in the dataset can be expressed as:
$$\mathcal{E} = \{e_k = (x_k, y_k, t_k, p_k) \mid k = 1, \ldots, n\}$$
For a given pose in the dataset, the ground truth resolution is limited to 5 ms, while the event resolution is 1 $\mu$s. Therefore, it is necessary to acquire the events that transpire within the time period corresponding to each pose.
Figure 3: PEPNet overall architecture. The input Event Cloud undergoes direct handling through a sliding window, sampling, and normalization, eliminating the need for any format conversion. Sequentially, the input passes through $S_{num}$ hierarchy structures for spatial feature abstraction and extraction. It further traverses a bidirectional LSTM for temporal feature extraction, culminating in a regressor responsible for 6-DOFs camera pose relocalization.
We call this time period the sliding window; it serves as the input to the model, as described by the following equation:
$$P_i = \{e_{j \rightarrow l} \mid t_l - t_j = R\}, \quad i = 1, \ldots, M \tag{2}$$
The symbol $R$ represents the time interval of the sliding window, where $j$ and $l$ denote the start and end event indices of the sequence, respectively. The variable $M$ represents the number of sliding windows into which the sequence of events $\mathcal{E}$ is divided. Before being fed into the neural network, $P_i$ also needs to undergo sampling and normalization. Sampling unifies the number of points $N$ used as network inputs; we set $N = 1024$ in PEPNet. Additionally, the spatial coordinates are normalized by the camera's resolution $w$ and $h$. The normalization process is described by the following equations:
$$PN_i = \left(\frac{X_i}{w}, \frac{Y_i}{h}, \frac{T_i - t_j}{t_l - t_j}\right) \tag{3}$$
$$X_i, Y_i, T_i = \{x_j, \ldots, x_l\}, \{y_j, \ldots, y_l\}, \{t_j, \ldots, t_l\} \tag{4}$$
$X$ and $Y$ are divided by the resolution of the event camera. To normalize $T$, we subtract the smallest timestamp $t_j$ of the window and divide by the time difference $t_l - t_j$, where $t_l$ represents the largest timestamp within the window. After pre-processing, the Event Cloud is converted into a pseudo-Point Cloud, which comprises explicit spatial information $(x, y)$ and implicit temporal information $t$.
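The pre-processing above can be summarized in a short sketch. The snippet below is a minimal illustration, assuming events arrive as a NumPy array with columns $(x, y, t, p)$ and that the sampling scheme is uniform random sampling (the paper does not specify it); function names such as `make_windows` and `preprocess_window` are illustrative rather than taken from the authors' code.

```python
# A minimal sketch of the Sec. 3.1 pre-processing: slide a window of temporal
# length R over the event stream, sample N events, and normalize (x, y, t).
import numpy as np

def make_windows(events, R):
    """Split an event stream [num_events, 4] with columns (x, y, t, p) into windows."""
    windows, start = [], 0
    for idx in range(len(events)):
        if events[idx, 2] - events[start, 2] >= R:
            windows.append(events[start:idx])
            start = idx
    return windows

def preprocess_window(window, w, h, N=1024):
    """Sample N events and normalize (x, y, t) as in Eq. (3), keeping temporal order."""
    idx = np.sort(np.random.choice(len(window), N, replace=len(window) < N))
    ev = window[idx]
    t0, t1 = window[0, 2], window[-1, 2]
    xyz = np.stack([ev[:, 0] / w,
                    ev[:, 1] / h,
                    (ev[:, 2] - t0) / (t1 - t0)], axis=1)
    return xyz  # pseudo-point cloud [N, 3], ordered by timestamp
```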
### 3.2 Hierarchy Structure
The hierarchy structure is the backbone for processing the pseudo-3D point cloud and is composed of four primary modules: grouping and sampling, standardization, feature extractor, and aggregation, as described in the following subsections. To efficiently extract deeper explicit spatial and implicit temporal features, the hierarchical structure is tailored and differs from conventional hierarchical structures in two ways. First, we no longer force permutation invariance as is usually done in mainstream point-based methods (Qi et al., 2017a; Ma et al., 2021), since the motion information is inherently related to the sequential order of events. Instead, we keep all events strictly in the order in which they are generated, preserving the temporal information for the next stage. Second, we replace MaxPooling in the aggregation step with temporal aggregation, which leverages an attention mechanism with softmax and improves the assimilation of temporal information into the resulting feature vectors.
#### 3.2.1 Grouping and Sampling
Aligned with the frame-based design concept, our focus is to capture both local and global information. Local information is acquired by leveraging Farthest Point Sampling (FPS) and K-Nearest Neighbors (KNN), while global information is obtained through a dedicated aggregation module.
$$PS_i = FPS(PN_i), \quad PG_i = KNN(PN_i, PS_i) \tag{5}$$
The input $PN_i$ has dimension $[N, 3 + D]$, the centroid set $PS_i$ has dimension $[N', 3 + D]$, and the group set $PG_i$ has dimension $[N', K, 3 + 2 * D]$. $K$ denotes the number of nearest neighbors of each center point (centroid), $D$ is the feature dimension of the points at the current stage, and 3 corresponds to the original \((X, Y, T)\) coordinate values. Importantly, the ordering of all points in the grouping and sampling process strictly adheres to the timestamp ($T$).
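As a concrete illustration of Eq. (5), the sketch below implements a simple FPS and KNN grouping in NumPy under the assumption that the stage input is an $[N, 3+D]$ array ordered by timestamp; the actual implementation details (distance metric, tie-breaking) are not specified in the paper, so this is only an approximation.

```python
# Illustrative FPS + KNN grouping that preserves temporal (index) order.
import numpy as np

def farthest_point_sampling(pts, n_centroids):
    """Return indices of n_centroids points chosen by FPS on the (x, y, t) coordinates."""
    xyz = pts[:, :3]
    chosen = [0]
    dist = np.linalg.norm(xyz - xyz[0], axis=1)
    for _ in range(n_centroids - 1):
        chosen.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(xyz - xyz[chosen[-1]], axis=1))
    return np.sort(np.array(chosen))          # sort indices to keep temporal order

def knn_group(pts, centroid_idx, k):
    """Group the k nearest neighbors of each centroid, ordered by timestamp."""
    xyz = pts[:, :3]
    groups = []
    for ci in centroid_idx:
        d = np.linalg.norm(xyz - xyz[ci], axis=1)
        nn = np.sort(d.argsort()[:k])          # chronological order within the group
        groups.append(pts[nn])
    return np.stack(groups)                    # [N', K, 3+D]
```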
### 3.2.2 STANDARDIZATION
Next, each group undergoes a standardization process to ensure consistent variability between points within the group, as illustrated in this formula:
\[
PGS = \frac{PG - PS}{Std(PG)} \quad Std(PG_i) = \sqrt{\frac{\sum_{j=0}^{3n-1}(g_j - \bar{g})^2}{3n - 1}} \tag{6}
\]
\[g = [x_0, y_0, t_0, \ldots, x_n, y_n, t_n] \tag{7}\]
Where \(PG_i\) and \(PS_i\) are the subsets of \(PG\) and \(PS\), \(Std\) is the standard deviation, the dimension of \(Std(PG)\) is \([M]\) which is consistent with the number of sliding windows, and \(g\) is the set of coordinates of all points in the \(PG_i\).
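A minimal sketch of Eqs. (6)–(7) is shown below, assuming the group and centroid tensors produced by the previous stage; the small epsilon added for numerical stability is an assumption.

```python
# Per-group standardization: center each group on its centroid and divide by
# the standard deviation of the group's flattened (x, y, t) coordinates.
import numpy as np

def standardize_groups(groups, centroids):
    """groups: [N', K, 3+D]; centroids: [N', 3+D] -> standardized groups."""
    centered = groups - centroids[:, None, :]
    std = groups[:, :, :3].reshape(len(groups), -1).std(axis=1, ddof=1)  # Eq. (6)
    return centered / (std[:, None, None] + 1e-8)
```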
### 3.2.3 FEATURE EXTRACTOR
Following the standardization of \(PG\) (subtracting the centroid and dividing by the standard deviation), feature extraction is performed using a Multi-Layer Perceptron (MLP) with a residual connection. This process encompasses two steps: local feature extraction and global feature extraction. The feature extractor with a bottleneck can be mathematically represented as:
\[
I(x) = f(BN(MLP_1(x))) \tag{8}
\]
\[
O(x) = BN(MLP_2(x)) \tag{9}
\]
\[
Ext(x) = f(x + O(I(x))) \tag{10}
\]
\(BN\) represents the batch normalization layer, while \(f\) signifies the nonlinear activation function. Both local and global feature extraction maintain identical input and output dimensions. The dimension increase occurs solely when the feature dimension $D$ of the current point is combined with the feature dimension $D$ of the centroid during grouping, resulting in a final dimension of \(2 \times D\). The feature extractor takes an input of dimension \([B, N, K, D]\), where \(B\) represents the batch size; following local feature extraction, the dimension remains \([B, N, K, D]\). We adopt the attention mechanism for aggregation, yielding an aggregated feature dimension of \([B, N, D]\). The aggregated feature map of \([B, N, D]\) is then processed through the global feature extractor, completing the feature extraction for the current stage.
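The residual feature extractor of Eqs. (8)–(10) can be sketched in PyTorch as follows; the bottleneck ratio and the choice of ReLU as the nonlinearity $f$ are assumptions, since the paper only specifies MLPs, batch normalization, and a residual connection.

```python
# Residual MLP extractor: Ext(x) = f(x + O(I(x))) with a bottleneck.
import torch
import torch.nn as nn

class ResidualExtractor(nn.Module):
    def __init__(self, dim, bottleneck=2):
        super().__init__()
        hidden = dim // bottleneck
        self.mlp1 = nn.Linear(dim, hidden)   # I(x) = f(BN(MLP1(x)))
        self.bn1 = nn.BatchNorm1d(hidden)
        self.mlp2 = nn.Linear(hidden, dim)   # O(x) = BN(MLP2(x))
        self.bn2 = nn.BatchNorm1d(dim)
        self.act = nn.ReLU()

    def forward(self, x):                    # x: [B*N*K, D] (flattened points)
        i = self.act(self.bn1(self.mlp1(x)))
        o = self.bn2(self.mlp2(i))
        return self.act(x + o)               # Ext(x) = f(x + O(I(x)))
```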
### 3.2.4 TEMPORAL AGGREGATION
Conventional Point Cloud methods favor MaxPooling for feature aggregation because it efficiently extracts the most salient feature from one point among a group of points and discards the rest. However, MaxPooling retains only the maximum value of each feature dimension along the temporal axis; it is robust to noise perturbation but ignores the temporal nuances embedded within the features. Conversely, integrating an attention mechanism preserves these nuanced but useful temporal attributes by aggregating features along the temporal axis through attention values. To provide a more comprehensive exposition, we employ a direct attention mechanism over the \(K\) temporal dimensions to aggregate features, as shown in Figure 3. This mechanism enables the explicit integration of temporal attributes, capitalizing on the inherent strict ordering of the \(K\) points. The following formulas elucidate the essence of this attention mechanism:
\[
F_{local} = Ext(x) = (S_{t1}, S_{t2}, \ldots, S_{tk}) \tag{11}
\]
\[
A = SoftMax(MLP(F_{local})) = (a_{t1}, a_{t2}, \ldots, a_{tk}) \tag{12}
\]
\[
F_{aggre} = A \cdot F_{local} = S_{t1} \cdot a_{t1} + S_{t2} \cdot a_{t2} + \cdots + S_{tk} \cdot a_{tk} \tag{13}
\]
Upon the application of the local feature extractor, the ensuing features are denoted as \(F_{local}\), and \(S_{tk}\) denotes the extracted feature of the \(k\)-th point in a group. The attention mechanism comprises an MLP layer with an input dimension of \(D\) and an output \(a_{tk}\) of dimension 1, followed by a softmax layer. Subsequently, the attention mechanism computes attention values, represented as \(A\). These attention values are then multiplied with the original features through batch matrix multiplication, resulting in the aggregated feature \(F_{aggre}\).
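The temporal aggregation of Eqs. (11)–(13) can be written compactly as below; the module is a sketch with illustrative names, not the authors' implementation.

```python
# Attention-based temporal aggregation: an MLP scores each of the K ordered
# points in a group and the softmax-weighted sum replaces MaxPooling.
import torch
import torch.nn as nn

class TemporalAggregation(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # MLP with output dimension 1

    def forward(self, f_local):               # f_local: [B, N, K, D]
        a = torch.softmax(self.score(f_local), dim=2)   # attention over K
        return (a * f_local).sum(dim=2)       # F_aggre: [B, N, D]
```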
3.3 A-Bi-LSTM
The temporal features extracted through the hierarchical structure are independent and parallel, lacking recurrent mechanisms within the network. This distinctive attribute, referred to as 'implicit', contrasts with the conventional treatment of temporal information as an indexed process. Consequently, implicit temporal features inadequately capture the interrelations among events along the timeline, whereas explicit temporal features assume a pivotal role in facilitating the CPR task.
To explicitly capture temporal patterns, we introduce the LSTM network, which has been proven effective in learning temporal dependencies. For optimal network performance, controlled feature dimensionality, and comprehensive capture of bidirectional relationships in pose context, we adopt a bi-directional LSTM network with a lightweight design. The integration of bidirectional connections into the recurrent neural network (RNN) is succinctly presented through the following equation:
\[
h_t = f(W_h \cdot x_t + U_h \cdot h_{t-1} + b_h) \\
h'_t = f(W'_h \cdot x_t + U'_h \cdot h'_{t+1} + b'_h) \\
y_t = V \cdot h_t + b_y \\
y'_t = V' \cdot h'_t + b'_y
\]
\(x_t\) represents the feature vector at the \(t\)-th time step of the input sequence, while \(h_{t-1}\) and \(h'_{t+1}\) correspond to the hidden states of the forward and backward RNN units, respectively, from the previous time step. The matrices \(W_h\), \(U_h\), and \(b_h\) denote the weight matrix and bias vector of the forward RNN unit, while \(V\) and \(b_y\) represent the weight matrix and bias vector of its output layer. Similarly, \(W'_h\), \(U'_h\), and \(b'_h\) are associated with the weight matrix and bias vector of the backward RNN unit, and \(V'\) and \(b'_y\) pertain to the weight matrix and bias vector of its output layer.
The activation function, denoted as \(f(\cdot)\), can be chosen as sigmoid, tanh, or another nonlinearity. The final output \(Y_a\) is aggregated over all time steps using the attention mechanism, and \(\oplus\) denotes the concatenation operation.
\[
Y_t = y_t \oplus y'_t \\
A = \text{SoftMax}(MLP(Y_t)) \\
Y_a = A \cdot Y_t
\]
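A compact PyTorch sketch of the A-Bi-LSTM is given below, assuming a hidden size of $D/2$ per direction (consistent with the dimensions in Algorithm 1) and reusing the same attention-based aggregation as in Section 3.2.4; it is illustrative rather than the authors' code.

```python
# Bidirectional LSTM over the N temporally ordered point features, followed by
# attention aggregation over time steps to produce a single feature vector.
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim // 2, batch_first=True, bidirectional=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                     # x: [B, N, D], temporally ordered
        y, _ = self.lstm(x)                   # Y_t = y_t concat y'_t, shape [B, N, D]
        a = torch.softmax(self.score(y), dim=1)
        return (a * y).sum(dim=1)             # Y_a: [B, D]
```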
3.4 LOSS FUNCTION
A fully connected layer with a hidden layer is employed to address the final 6-DOFs pose regression task. The displacement vector of the regression is denoted as \(\hat{p}\), representing the magnitude and direction of movement, while the rotational Euler angles are denoted as \(\hat{q}\), indicating the rotational orientation in three-dimensional space.
\[
Loss = \alpha ||\hat{p} - p||_2 + \beta ||\hat{q} - q||_2 + \lambda \sum_{i=0}^{n} w_i^2
\]
\(p\) and \(q\) represent the ground truth obtained from the dataset, while \(\alpha\), \(\beta\), and \(\lambda\) serve as weight proportion coefficients. In order to tackle the prominent concern of overfitting, especially in the end-to-end setting, we incorporate L2 regularization into the loss function. This regularization, implemented as the squared L2 norm of the network weights \(w\), effectively mitigates the impact of overfitting.
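The loss can be sketched as follows; the coefficient values are placeholders (assumptions), and in practice the L2 term would typically be realized through the optimizer's weight decay rather than an explicit sum.

```python
# Weighted L2 errors on translation and rotation plus explicit L2 weight decay.
import torch

def pepnet_loss(p_hat, q_hat, p, q, model, alpha=1.0, beta=1.0, lam=1e-4):
    loss = alpha * torch.norm(p_hat - p, p=2, dim=-1).mean() \
         + beta * torch.norm(q_hat - q, p=2, dim=-1).mean()
    l2 = sum((w ** 2).sum() for w in model.parameters())   # lambda * sum w_i^2
    return loss + lam * l2
```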
Algorithm 1 PEPNet pipeline
Input: Raw Event Cloud \(E\)
Parameters: \(N_p = 1024, R = 5e3, S_{num} = 3\)
Output: 6-DOFs pose \((\hat{p}, \hat{q})\)
1: Preprocessing
2: for \(j\) in len(E) do
3: \(P_i.append(e_{j \rightarrow l}); j = l;\) where \(t_l - t_j = R\)
4: if (len(\(P_i\)) > \(N_p\)): \(i = i + 1;\)
5: end for
6: \(PN = \text{Normalize(Sampling(P))}\)
7:
8: Hierarchy structure
9: for stage in range(\(S_{num}\)) do
10: Grouping and Sampling(\(PN\))
11: Get \(PGS \in [B, N_{stage}, K, 2 * D_{stage-1}]\)
12: Local Extractor(\(PGS\))
13: Get \(F_{local} \in [B, N_{stage}, K, D_{stage}]\)
14: Attentive Aggregate(\(F_{local}\))
15: Get \(F_{aggre} \in [B, N_{stage}, D_{stage}]\)
16: Global Extractor(\(F_{aggre}\))
17: Get \(PN = F_{global} \in [B, N_{stage}, D_{stage}]\)
18: end for
19:
20: A-Bi-LSTM
21: Forward Get \(y_t \in [B, N_3, D_{S_{num}}/2]\)
22: Reverse Get \(y'_t \in [B, N_3, D_{S_{num}}/2]\)
23: Attention Get \(Y_a \in [B, D_{S_{num}}]\)
24:
25: Regressor
26: Get 6-DOFs pose \((\hat{p}, \hat{q})\)
3.5 Overall Architecture
Next, we will present the PEPNet pipeline in pseudo-code, utilizing the previously defined variables and formulas as described in Algorithm 1.
4 Experiment
In this section, we present an extensive and in-depth analysis of PEPNet’s performance on a public dataset, encompassing evaluations based on rotational and translational mean squared error (MSE), model parameters, floating-point operations (FLOPs), and inference time. Through a series of systematic ablation experiments, we experimentally validate the efficacy of each module. PEPNet’s training and testing are performed on a server furnished with an AMD Ryzen 7950X CPU, a GeForce RTX 4090 GPU, and 32GB of memory.
4.1 Dataset
We employ the widely evaluated event-based CPR dataset (Mueggler et al., 2017) collected using the DAVIS 240C. This dataset encompasses a diverse set of multimodal information, comprising events, images, IMU measurements, camera calibration, and ground truth information acquired from a motion capture system operating at an impressive frequency of 200 Hz, thereby ensuring sub-millimeter precision. We visualized various types of sequences as shown in Figure 4.
Two distinct methods of partitioning the dataset (Nguyen et al., 2019) have been benchmarked: random split and novel split. In the random split, 70% of the data in each sequence is randomly selected for training and the remainder is allocated for testing. In the novel split, the data is divided chronologically, with the initial 70% of each sequence used for training and the subsequent 30% for testing.
4.2 Baseline
We perform a thorough evaluation of our proposed method by comparing it with SOTA event-based approaches, namely CNN-LSTM (Tabia et al., 2022) and AECRN (Lin et al., 2022). Moreover, we present results derived from other well-established computer vision methods, including PoseNet (Kendall et al., 2015), Bayesian PoseNet (Kendall & Cipolla, 2016), Pairwise-CNN (Laskar et al., 2017), LSTM-Pose (Walch et al., 2017), and SP-LSTM (Nguyen et al., 2019).
4.3 Random Split Results
Based on the findings presented in Table 1, it is apparent that PEPNet surpasses other models concerning both rotation and translation errors across all sequences. Notably, PEPNet achieves these impressive results despite utilizing significantly fewer model parameters and FLOPs compared to the frame-based approach. Moreover, PEPNet not only exhibits a remarkable 38% improvement in the average error compared to the SOTA CNN-LSTM method but also attains superior results across nearly all sequences. In addressing the more intricate and challenging hdr_poster sequences, while the frame-based approach relies on a denoising network to yield improved results (Jin et al., 2021), PEPNet excels by achieving remarkable performance without any additional processing. This observation strongly implies that PEPNet’s point cloud approach exhibits greater robustness compared to the frame-based method, highlighting its inherent superiority in handling complex scenarios.
Furthermore, we introduce an alternative variant, PEPNettiny, which integrates a lighter model architecture while preserving relatively strong performance. As depicted in Figure 3, PEPNet consists
Table 1: Random split results. The table presents the median error for each sequence, as well as the average error across the six sequences. It also presents the number of parameters and FLOPs for each model. Bold indicates the most advanced result, while underline signifies the second-best result.
| Network | PoseNet | Bayesian PoseNet | Pairwise-CNN | LSTMPose | SP-LSTM | CNN-LSTM | PEPNet | PEPNettiny |
|------------------|---------|------------------|--------------|----------|---------|----------|--------|------------|
| Parameter | 12.43M | 22.35M | 22.34M | 16.05M | 135.25M | 12.63M | 0.774M | **0.064M** |
| FLOPs | 1.584G | 3.679G | 7.399G | 1.822G | 15.62G | 1.960G | 0.459G | **0.033G** |
| shapes_rotation | 0.109m,7.388° | 0.142m,9.557° | 0.095m,6.332° | 0.032m,4.439° | 0.025m,2.256° | 0.01m,1.682° | **0.005m,1.372°** | 0.006m,1.592° |
| box_translation | 0.193m,6.977° | 0.190m,6.636° | 0.178m,6.153° | 0.063m,6.215° | 0.036m,2.195° | **0.013m,0.873°** | 0.017m,0.861° | 0.031m,1.516° |
| shapes_translation | 0.238m,6.035° | 0.264m,6.355° | 0.203m,5.146° | 0.096m,5.018° | 0.035m,2.117° | 0.00m,1.782° | **0.00m,1.782°** | 0.00m,1.769° |
| dynamic_6dof | 0.282m,6.332° | 0.268m,5.963° | 0.256m,5.146° | 0.090m,4.206° | 0.032m,2.046° | 0.01m,1.662° | **0.01m,1.645°** | 0.01m,1.647° |
| hdr_poster | 0.282m,8.513° | 0.290m,8.710° | 0.232m,7.234° | 0.108m,6.186° | 0.051m,3.354° | 0.03m,2.421° | **0.01m,0.991°** | 0.028m,1.863° |
| poster_translation | 0.266m,6.516° | 0.264m,5.459° | 0.211m,6.439° | 0.079m,5.734° | 0.036m,2.074° | 0.02m,1.468° | **0.01m,0.588°** | 0.01m,0.953° |
| Average | 0.231m,7.455° | 0.241m,7.593° | 0.194m,6.211° | 0.07m,5.721° | 0.036m,2.341° | 0.01m,1.591° | **0.01m,0.904°** | 0.01m,1.306° |
Table 2: Novel split results. The table presents the same information as Table 1. To assess the model’s runtime, we conduct tests on a server platform, focusing on the average time required for inference on a single sample.
Although PEPNettiny demonstrates the potential to outperform previous SOTA results in terms of the final average performance, it reveals evident weaknesses and underfitting when handling more complex sequences, such as hdr_poster and box_translation. The limitations in the abstraction ability of PEPNettiny become apparent. It is important to acknowledge that PEPNet’s results might improve with a larger dataset, indicating the significant impact of data size on the model’s performance.
4.4 Error Distribution
Figure 5 illustrates the error distribution of PEPNet across six distinct sequences under the random split, specifically: shape rotation, box translation, shape translation, dynamic 6-dof, hdr poster, and poster translation. The bottom and top boundaries of each box represent the first and third quartiles, respectively, indicating the inter-quartile range (IQR), and the median is denoted by the band within the box. The IQR of the translation error lies approximately between 0.004 m and 0.024 m, while that of the orientation error ranges from 0.4° to 1.9°.
Among the six sequences, shape rotation and box translation display the poorest results in rotation and translation, respectively, primarily due to the inherent complexity of the dataset. As the scene becomes more intricate and the resolution increases, such as in the hdr poster, the model is challenged to exhibit its robustness. Notably, PEPNet demonstrates enhancements of approximately 50% compared to the SOTA model in this scenario.
4.5 Novel Split Results
To assess the model’s robustness, we adopt the novel split as an evaluation criterion, as shown in Table 2. During the training process, we observe a more pronounced overfitting phenomenon in PEPNet compared to the random split. We attribute this observation to the disparities in data distributions between the trainset and the testset, as well as the limited data size. Contrary to the
methods we compared, PEPNet does not necessitate pre-trained weights. For instance, SP-LSTM relies on pre-trained VGG19 weights from Imagenet, while AECRN requires synthetic heuristic depth and an extensive pretraining process.
To address overfitting, PEPNet employs conventional methods that yield consistent and comparable results with the SOTA on three shape sequences that are displayed in the network column of Table 2. It is essential to note that AECRN adopts a hybrid approach, combining neural network regression for scene coordinates with derivable RANSAC for pose estimation. Moreover, this method incurs significant time consumption, with even the SOTA DSAC* algorithm taking nearly 30ms, excluding additional time for format conversion. This time constraint presents compatibility challenges with the low-latency nature of event cameras. In contrast, PEPNet can execute on a server in just 6.7ms, with the main time-consuming module being grouping and sampling. Furthermore, with potential field programmable gate array (FPGA) or application-specific integrated chip (ASIC) support for these operations, PEPNet’s performance can be further accelerated.
### 4.6 Attention Visualization
As shown in Figure 6, we observe that the attention values are larger at both the start and the end of the window. Our conjecture is that, during camera pose relocalization, the model intensifies its emphasis on the differences in features between the initial and terminal points and regresses the 6-DOFs pose from these differences, similar to geometric methods (Mueggler et al., 2018; Gallego et al., 2015).
### 4.7 Ablation Study
In order to validate the efficacy of key modules, we conducted ablation experiments focusing on three primary components: hierarchy structure, Bi-LSTM, and attention. These experiments are designed to evaluate rotation and translation errors on the shape translation sequence with random split. The combined error (T+R) is measured after processing.
Our experimental setup comprises four distinct conditions, as illustrated in Table 3. Condition 1 represents the sole utilization of the hierarchy structure (HS), while Condition 2 combines the ordinary LSTM. Condition 3 incorporates the bidirectional LSTM, and Condition 4 integrates the attention mechanism for feature aggregation.
The ablation experiments reveal significant insights. Experiments 1 and 2 demonstrate that augmenting LSTM enhances the extraction of explicit temporal features. Moreover, experiments 2 and 3 reveal the effectiveness of the bidirectional LSTM in extracting motion information. Additionally, experiments 3 and 4 confirm the notable impact of attention in feature aggregation, resulting in a substantial reduction in error rates.
### 5 Conclusion
In this paper, we introduce an end-to-end CPR network that operates directly on raw event clouds without frame-based preprocessing. PEPNet boasts an impressively lightweight framework that adeptly extracts spatial and temporal features, leading to SOTA outcomes on publicly accessible datasets. Diverging from traditional frame-based approaches, our method prioritizes preserving the inherent distribution of the event camera output, capitalizing on its sparse nature to achieve extraordinary capabilities for ultra-low-power CPR applications.
| Condition | HS | LSTM | Bi-LSTM | Aggregation | Translation | Rotation | T+R |
|-----------|----|------|---------|-------------|------------|----------|-----|
| 1 | ✓ | | | Max | 0.015m | 0.884° | 3.04|
| 2 | ✓ | | | Temporal | 0.014m | 0.786° | 2.77|
| 3 | ✓ | ✓ | | Max | 0.014m | 0.833° | 2.85|
| 4 | ✓ | | ✓ | Max | 0.014m | 0.813° | 2.82|
| 5 | ✓ | | ✓ | Temporal | **0.011m** | **0.582°**| **2.12**|
Table 3: Ablation study for three key modules. T+R = Translation + Rotation·π/180 (m + rad)
REFERENCES
Vassileios Balntas, Shuda Li, and Victor Prisacariu. Relocnet: Continuous metric learning relocalisation using neural nets. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 751–767, 2018.
Eric Brachmann and Carsten Rother. Visual camera re-localization from rgb and rgb-d images using dsac. IEEE transactions on pattern analysis and machine intelligence, 44(9):5847–5865, 2021.
Eric Brachmann, Alexander Krull, Sebastian Nowozin, Jamie Shotton, Frank Michel, Stefan Gumhold, and Carsten Rother. Dsac-differentiable ransac for camera localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6684–6692, 2017.
Tobi Delbruck and Manuel Lang. Robotic goalie with 3 ms reaction time at 4% cpu load using event-based dynamic vision sensor. Frontiers in neuroscience, 7:223, 2013.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020.
Guillermo Gallego, Christian Forster, Elias Mueggler, and Davide Scaramuzza. Event-based camera pose tracking using a generative event model. arXiv preprint arXiv:1510.01972, 2015.
Guillermo Gallego, Jon EA Lund, Elias Mueggler, Henri Rebecq, Tobi Delbruck, and Davide Scaramuzza. Event-based, 6-dof camera tracking from photometric depth maps. IEEE transactions on pattern analysis and machine intelligence, 40(10):2402–2412, 2017.
Guillermo Gallego, Tobi Delbrück, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew J Davison, Jörg Conradt, Kostas Daniilidis, et al. Event-based vision: A survey. IEEE transactions on pattern analysis and machine intelligence, 44(1):154–180, 2020.
Yulan Guo, Hanyun Wang, Qingyong Hu, Hao Liu, Li Liu, and Mohammed Bennamoun. Deep learning for 3d point clouds: A survey. IEEE transactions on pattern analysis and machine intelligence, 43(12):4338–4364, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
Yifan Jin, Lei Yu, Guangqiang Li, and Shumin Fei. A 6-dofs event-based camera relocalization system by cnn-lstm and image denoising. Expert Systems with Applications, 170:114535, 2021.
Alex Kendall and Roberto Cipolla. Modelling uncertainty in deep learning for camera relocalization. In 2016 IEEE international conference on Robotics and Automation (ICRA), pp. 4762–4769. IEEE, 2016.
Alex Kendall, Matthew Grimes, and Roberto Cipolla. Posenet: A convolutional network for real-time 6-dof camera relocalization. In Proceedings of the IEEE international conference on computer vision, pp. 2938–2946, 2015.
Zakaria Laskar, Iaroslav Melekhov, Surya Kalia, and Juho Kannala. Camera relocalization by computing pairwise relative poses using convolutional neural network. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 929–938, 2017.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436–444, 2015.
Patrick Lichtsteiner, Christoph Posch, and Tobi Delbruck. A 128 × 128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE journal of solid-state circuits, 43(2):566–576, 2008.
QHfIe4chR5
Long-distance Targeted Poisoning Attacks on Graph Neural Networks
Anonymous authors
Paper under double-blind review
Abstract
GNNs are vulnerable to targeted poisoning in which an attacker manipulates the graph to cause a target node to be mis-classified to a label chosen by the attacker. However, most existing targeted attacks inject or modify nodes within the target node’s $k$-hop neighborhood to poison a $k$-layer GNN model. In this paper, we investigate the feasibility of long-distance attacks, i.e., attacks where the injected nodes lie outside the target node’s $k$-hop neighborhood. We show such attacks are feasible by developing a bilevel optimization-based approach, inspired by meta-learning. While this principled approach can successfully attack small graphs, scaling it to large graphs requires significant memory and computation resources, and is thus impractical. Therefore, we develop a much less expensive, but approximate, heuristic-based approach that can attack much larger graphs, albeit with lower attack success rate. Our evaluation shows that long-distance targeted poisoning is effective and difficult to detect by existing GNN defense mechanisms. To the best of our knowledge, our work is the first to study long-distance targeted poisoning attacks.
1 Introduction
Many recent papers have proposed attacks on GNNs that allow an attacker to cause mis-predictions by strategically modifying the graph structure (Zügner & Günnemann [2019], Xu et al. [2019], Dai et al. [2018], Chen et al. [2018; 2020], Zang et al. [2020], Chang et al. [2020], Bojchevski & Günnemann [2019], Wang & Gong [2019], Geisler et al. [2021a], Mujkanovic et al. [2022], Wang et al. [2022]), modifying the features of an existing node in the graph (Zügner et al. [2018], Liu et al. [2019], Wu et al. [2019], Ma et al. [2020]), or injecting new nodes with carefully crafted features (Sun et al. [2020], Jiang et al. [2022], Zou et al. [2021], Tao et al. [2021], Chen et al. [2022], Ju et al. [2022], Wang et al. [2020] [2018], Nguyen Thanh et al. [2023]). Depending on when the adversarial perturbation occurs, these attacks can be classified as poisoning (training time) attacks or evasion (test time) attacks. Furthermore, depending on whether the attacker aims to misclassify a specific node or degrade the overall prediction accuracy, the attacks can be targeted (local) or untargeted (global) attacks. In this paper, we investigate targeted poisoning on node classification GNNs, where an attacker’s goal is to flip a selected target node’s label to an attacker-chosen label.
Existing attacks of this type assume that attackers have great flexibility in how they modify the graph, e.g., they can add edges between any chosen pairs of nodes (Zügner et al. [2018], Chang et al. [2020], Dai et al. [2018], Chen et al. [2018], Xu et al. [2019], Zang et al. [2020], Geisler et al. [2021a], Wang & Gong [2019], Bojchevski & Günnemann [2019]) or connect a new fake node directly to the target node (Wang et al. [2020], Chen et al. [2022], Dai et al. [2023], Xi et al. [2021]). However, changing nodes close to the target is undesirable because it makes attack detection easy: many existing tools (Ying et al. [2019], Huang et al. [2022], Luo et al. [2020], Yuan et al. [2021], Duval & Malliaros [2021]) measure the influence of nodes within a target’s $k$-hop neighborhood and attack nodes generally have high influence values.
In this paper, we consider a more practical but restricted scenario, where an attacker can inject new nodes but can only connect them to nodes beyond a threshold distance from the target. In other words, we seek attacks that, given a threshold $k$ and target node $v_t$, can insert (inject) new nodes or add edges connecting injected nodes to existing nodes that lie outside $v_t$’s $k$-hop neighborhood. We call such attacks long-distance targeted poisoning attacks.
Inspired by meta learning (Bengio [2000], Finn et al. [2017]), we design a bilevel optimization-based method to craft a long distance attack, called MetaLDT (aka Meta learning based Long Distance Targeted poisoning). Like the previous application of meta learning to GNN poisoning (Zügner & Günnemann [2019]), we treat the graph perturbation, which includes injected nodes’ features and their connections to existing nodes, as a hyperparameter to learn. However, unlike (Zügner & Günnemann [2019]), we introduce several important constraints to the optimization to make the attack hard to detect. These include constraints that prevent changes to the target’s $k$-hop neighborhood, that avoid noticeably changing node degree distribution, and that maintain graph homophily. Our evaluation shows that MetaLDT achieves excellent poison success rates over regular GNNs ($>84\%$) as well as robust versions that have been fortified with state-of-the-art defenses. However, MetaLDT has high computation and memory requirements, and thus cannot scale: we found that we could only use it with small graphs with at most a few thousand nodes (e.g., Cora and Citeseer), and even in these cases an attack could take a day or more. Thus, the attack cannot target most practical graphs.
To address this, we examine the graphs generated by MetaLDT to learn common patterns. We find that in order to flip a target node’s label to $y$, MetaLDT’s optimization tries to make the target node’s embedding—the last GNN layer’s output before the softmax classifier—“collide” with the embedding of nodes labelled $y$. Based on this observation, we design MimicLDT, a scalable heuristic attack that mimics MetaLDT’s effect. MimicLDT attaches a few fake nodes to a small subset of existing nodes (aka points of attack) with target label $y$ and uses gradient-based optimization to craft the features of the fake nodes so that the points of attack become close to the target node in the embedding space. While less effective than MetaLDT, our evaluation shows that MimicLDT nevertheless achieves decent poison success rate ($>55\%$) and can scale to graphs with hundreds of thousands of nodes (e.g. arXiv).
In summary, we make the following contributions:
• We study a new type of targeted poisoning attack on GNNs that does not modify the target node’s $k$-hop neighborhood. We call such an attack long distance poisoning.
• We propose two optimization-based methods, MetaLDT and MimicLDT, to achieve targeted poisoning by injecting fake nodes that lie beyond the target node’s $k$-hop neighborhood. MetaLDT is based on the principled approach of meta learning but is too expensive to use with graphs containing more than a few thousand nodes. MimicLDT builds upon the insights discovered by MetaLDT to perform direct optimization without meta learning and can scale to much larger graphs.
• We evaluate our attack on different graphs and defenses. To the best of our knowledge, we are the first to show the existence and effectiveness of long distance targeted poisoning attacks.
2 Related work
We discuss related work on targeted attacks for GNN-based node classification. For a broader discussion including untargeted attacks, attacks on tasks other than node classification as well as GNN defenses, we refer readers to existing surveys (Lin et al. [2020a], Zheng et al. [2021]).
Targeted attacks can occur during training time (poisoning attacks) or test time (evasion attacks). For any given $k$-layer GNN model architecture, the target node’s label prediction is a function of (1) model weights, (2) the input features of the target node itself, and (3) the input features of the target’s $k$-hop neighbors. Thus, in order to manipulate the target’s label prediction, the attacker can try to corrupt any of these three factors. Attacks of type (2), aka corrupting the target’s input features, is a well-studied problem in non-graph domains (Goodfellow et al. [2015], Shafahi et al. [2018], Madry et al. [2018]) such as images, text, and time series, and can be straightforwardly extended to the graph setting. Thus, existing GNN attacks, including both poisoning and evasion attacks, focus on the adversarial manipulation of (3), aka the target’s $k$-hop neighborhood, achieved through adding/removing edges (referred to as structure perturbation attacks), or adding fake nodes (referred to as injection attacks). The manipulation of (3) can be further categorized into direct vs. indirect attacks depending on whether the target’s direct or $k$-hop neighborhood is modified. There is a vast collection of attacks of type (3); most perturb graph structure, e.g. NetAttack (Zügner et al. [2018]), FGA (Chen et al. [2018]),
MGA (Chen et al., 2020), PGD (Xu et al., 2019), DICE (Waniek et al., 2018), GUA (Zang et al., 2020), RL-S2V (Dai et al., 2018), Bojchevski et al. (Bojchevski & Günnemann, 2019), GF-Attack (Chang et al., 2020), GAFNC (Jiang et al., 2022), IG-JSMA (Wu et al., 2019), Wang et al. (Wang et al., 2022) and PR-BCD/GR-BCD (Geisler et al., 2021a). Some also modify existing nodes’ labels or features, e.g., (Liu et al., 2019; Zügner et al., 2018; Wu et al., 2019). Others inject fake nodes, e.g., Wang et al. (Wang et al., 2018), TDGIA (Zou et al., 2021), AFGSM (Wang et al., 2020), G²A2C (Ju et al., 2022) and G-NIA (Tao et al., 2021).
Our work differs from existing ones in that we aim to achieve targeted poisoning by changing the graph $G$ used to train the GNN model. The key advantage of this approach is that we can make the resulting attack long distance, by completely avoiding any modification to the target’s k-hop neighborhood. This is desirable for two reasons: one, it makes the attack difficult to detect, foiling analysis tools like GNNExplainer (Ying et al., 2019) and others (Luo et al., 2020; Yu & Gao, 2022; Yuan et al., 2021; Duval & Malliaros, 2021). Two, it makes the attack easier to launch since there are many more potential attack points beyond the target’s k-hop neighborhood.
Recent work such as HAO (Chen et al., 2022) and ADIMA (Tao et al.) aim to make attacks hard to detect; the former adds a homophily constraint and the latter uses a GAN-style framework to train a discriminator to distinguish subgraphs that include fake nodes from those that don’t. Like HAO, our attack also tries to preserve homophily. Existing attacks all assume that the attacker can add/remove edges to any existing node. A recent proposal tries to make attacks more realistic by assuming that the attacker can only use a subset of nodes as attack points (Ma et al., 2020). Our work can also be extended to this setting by restricting the set of attack points.
Our attack assumes that the attacker knows the process used to train the model being attacked, this information includes knowledge of what defenses are used. Prior work (Zügner et al., 2018; Zügner & Günnemann, 2019) has made similar assumptions, and our approach is inspired by these proposals.
### 3 Background and Problem Definition
In this section, we define the terminology we use, formalize our setting, and state our assumptions about the attacker’s capabilities. We use the standard terminology and notation for graphs and GNNs:
**Graphs** We use $G = (\mathcal{V}, \mathcal{E})$ to denote a graph with nodes $\mathcal{V} = \{v_1, v_2, ..., v_n\}$ and edges $\mathcal{E} = \{e_1, e_2, ..., e_m\}$, use $A \in \{0, 1\}^{n \times n}$ to denote graph $G$’s adjacency matrix, and $X \in \mathbb{R}^{n \times d}$ to denote the $d$-dimensional feature matrix for each node $v \in \mathcal{V}$.
**GNN based node classification** Our work focuses on node classification in the transductive learning setting: our training data consists of a graph $G$, a subset of whose nodes are labeled, denoted as $\mathcal{V}_L \subset \mathcal{V}$. Given this training data, node classification aims to learn a function that can predict labels for unlabeled nodes $\mathcal{V}_U := \mathcal{V} \setminus \mathcal{V}_L$.
For node classification, the computation can be viewed as consisting of two parts: first, one uses a GNN to compute an embedding for each node that represents the node’s features and neighborhood, and second, one feeds this embedding to a classification layer to get a set of logits for label prediction. In what follows, we use $f_\theta$ to represent the GNN model with $\theta$ representing its weights, and we use the term node embedding to represent the embedding computed by $f_\theta$.
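The two-part view of node classification (an embedding GNN $f_\theta$ followed by a classification layer) can be made concrete with a small sketch; the layer sizes and the use of torch_geometric's GCNConv are assumptions for illustration, not the paper's architecture.

```python
# Minimal f_theta / classifier split for transductive node classification.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class NodeClassifier(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.gnn1 = GCNConv(in_dim, hid_dim)        # f_theta: produces node embeddings
        self.gnn2 = GCNConv(hid_dim, hid_dim)
        self.cls = nn.Linear(hid_dim, num_classes)  # classification layer -> logits

    def forward(self, x, edge_index):
        h = torch.relu(self.gnn1(x, edge_index))
        h = self.gnn2(h, edge_index)                # node embeddings
        return self.cls(h), h                       # logits and embeddings
```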
#### 3.1 Attack Model
**Attacker’s goal** We consider targeted label-flipping attacks, where the attacker selects a target node $v_t$ and target label $y_t$, and the attack aims to alter the graph used to train the GNN $f_\theta$ so that it predicts label $y_t$ for node $v_t$.
**Attacker’s knowledge** We assume the attacker has access to the training data, including the original graph $G$, node features, and labels, and also knows the training procedure (including any changes made to the training to improve model robustness). Note that we do not assume knowledge of model weights, but if available this can be used to further reduce the cost of MimicLDT.
**Attacker’s capability** We constrain the attacker to long-distance node injection attacks. This means that the attacker cannot modify an existing node’s features, remove existing edges, or add edges that connect two nodes that are already present in the graph. In addition, constraining to long-distance attacks means the attacker can only add edges to nodes that are outside the target’s k-hop neighborhood. The attacks we consider add (inject) one or more new nodes, and add edges connecting these injected...
nodes to each other or to existing nodes that are outside the target’s k-hop neighborhood. Finally, we constrain the number of nodes and edges that an attacker can add.
More formally, we define **node injection attacks** as follows: an attack that generates a poisoned graph \( G' = (V', E') \) by injecting a set of malicious nodes \( V_{inj} \) as \( A' = \begin{bmatrix} A & B_{inj} \\ B_{inj}^T & O_{inj} \end{bmatrix}, X' = \begin{bmatrix} X \\ X_{inj} \end{bmatrix} \), where \( X_{inj} \) are the injected nodes features, \( B_{inj} \) is the adjacency matrix between injected and existing nodes in \( G \). We refer to an existing node that connects to any injected node as an attack point. We limit the number of injected nodes, and their degree, i.e., we require that \( |V_{inj}| \leq \triangle \in \mathbb{Z} \) and \( 1 \leq \text{deg}(i) \leq b \in \mathbb{Z}, \forall i \in V_{inj} \) for some threshold \( \triangle \) and \( b \).
We define **long-distance node injection attacks** as follows: a node injection attack where no attack point is within the target node \( v_t \)'s k-hop neighborhood. More formally, in a long distance attack on a k-layer GNN, \( \forall v_a \in V_a, d(v_a, v_t) > k \) where \( V_a \) is the set of existing-graph nodes connected to injected nodes (aka attack points), and \( d(v_a, v_t) \) is the path length from \( v_a \) to \( v_t \).
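Enforcing the long-distance constraint amounts to excluding the target's $k$-hop ball from the set of admissible attack points, which can be done with a breadth-first search; the sketch below uses networkx and illustrative names.

```python
# Candidate attack points: every existing node farther than k hops from the target.
import networkx as nx

def candidate_attack_points(G, target, k):
    """Return nodes of G that lie outside the target's k-hop neighborhood."""
    within_k = nx.single_source_shortest_path_length(G, target, cutoff=k)
    return [v for v in G.nodes if v not in within_k]
```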
### 3.2 Problem Formulation
We start by formalizing attacks as an optimization problem, which we can solve using a meta-learning inspired approach. GNN attacks can generally be formalized as:
\[
\min_{G'} L_{atk}(f_{\theta^*}(G')) \quad s.t. \quad \theta^* = \arg\min_{\theta} L_{train}(f_{\theta}(G')).
\]
where \( L_{train} \) is the general loss function used when training model \( f_{\theta} \), which we assume the attacker knows. Therefore, our goal is to find a graph, \( G' \), that minimizes the attacker’s loss \( L_{atk} \).
A targeted label-flipping attack requires incorporating the target node and desired label into the loss-function. More precisely, we need a loss function that maximize the target node’s logit (i.e., the model’s confidence score for a label) for the attacker-chosen label. Therefore, we use \( L_{atk} = -M_{G'}(v_t)[y_t] \), where \( M_{G'}(v_t) = f_{\theta^*}(v_t; G') \), which maximizes the probability that target node \( v_t \) has label \( y_t \). Beyond this, and similar to prior work (Chen et al., 2022), we want to ensure that the attack is stealthy and injected nodes do not differ significantly from existing nodes. To do so, we incorporate a homophily term in \( L_{atk} \) that minimizes feature differences between an injected node and its neighbors. Our final attacker loss function, \( L_{atk} \), is thus:
\[
L_{atk} = -M_{G'}(v_t)[y_t] - \beta C(G')
\]
\[
C(G') = \frac{1}{|V_{inj}|} \sum_{u \in V_{inj}} \text{sim}(r_u, X_u), \quad \text{where } r_u = \sum_{j \in N(u)} \frac{1}{\sqrt{d_j} \sqrt{d_u}} X_j
\]
where \( \beta \) is a hyperparameter that controls how important homophily is, \( \text{sim}(\cdot) \) measures cosine similarity, \( N(u) \) is the set of nodes neighboring node \( u \), and \( d_u \) is the node degree. The homophily formulation above is based on Chen et al. (2022).
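A PyTorch sketch of the attacker loss $L_{atk}$ with the homophily term $C(G')$ is shown below; it assumes dense adjacency and feature tensors and that `logits` holds the classifier outputs for all nodes, which are simplifications for clarity.

```python
# Attacker loss: maximize the target's score for the chosen label while keeping
# injected nodes feature-similar to their degree-normalized neighborhoods.
import torch
import torch.nn.functional as F

def attacker_loss(logits, target, y_target, adj, feats, inj_idx, beta=1.0):
    conf = logits[target, y_target]                      # M_{G'}(v_t)[y_t]
    deg = adj.sum(1).clamp(min=1)
    norm_adj = adj / torch.sqrt(deg[:, None] * deg[None, :])
    r = norm_adj[inj_idx] @ feats                        # r_u for injected nodes
    homophily = F.cosine_similarity(r, feats[inj_idx], dim=1).mean()
    return -conf - beta * homophily
```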
### 4 The MetaLDT Attack via Optimization
We start by describing an optimization approach to solving the problem defined in the previous section. Given information about how the model is trained as well as access to the original graph \( G \), a target node \( v_t \), and a target label \( y_t \), our optimization algorithm produces an attack graph \( G' \). To do this, we start with an initial attack graph \( G'_0 \) and iteratively modify it to minimize the attacker’s loss function \( L_{atk} \). Our iterative approach, inspired by meta-learning (Bengio, 2000; Zügner & Günnemann, 2019), treats the attack graph’s edges and features as hyperparameters, which it optimizes.
At the start of the process, MetaLDT produces an initial graph \( G'_0 \) by injecting \( \Delta \) new nodes into the input graph \( G \). These injected nodes have zeroed-out features, and no edges connecting them to any other node. In each iteration \( i \) (\( i \geq 0 \)), MetaLDT updates graph \( G'_i \) and produces the graph \( G'_{i+1} \), which the next iteration operates on. When producing \( G'_{i+1} \), we can either alter \( G'_i \)'s adjacency matrix (thus adding or removing edges) or feature matrix (thus changing node features), and MetaLDT uses alternating minimization to update both. Specifically, this means that our iterations alternate between changing the adjacency matrix and changing the feature matrix.
In each iteration \( i \), we determine updates to the feature or adjacency matrix (as appropriate) using a computed meta-gradient \( \nabla_{G'_i}^{\text{meta}} \), which we compute by unrolling the model training loop for \( T \) epochs. Formally:
\[
\nabla_{G'_i}^{\text{meta}} = \nabla_{G'_i} L_{atk}(f_{\theta_T}(G'_i))
\]
\[
= \nabla_f L_{atk}(f_{\theta_T}(G'_i)) \cdot [\nabla_{G'_i} f_{\theta_T}(G'_i) + \nabla_{\theta_T} f_{\theta_T}(G'_i) \cdot \nabla_{G'_i} \theta_T]
\]
where the last term is recursively defined as \( \nabla_{G'_i} \theta_{t+1} = \nabla_{G'_i} \theta_t - \alpha \nabla_{G'_i} \nabla_{\theta_t} L_{\text{train}}(f_{\theta_t}(G'_i)) \), and \( \alpha \) is the learning rate. Observe that computing this meta-gradient does not require access to the model used by the attacker’s victim, but requires running \( T \) training epochs, using the same training setting (i.e., the same algorithm and approach) as used by the victim. In the rest of the paper, we use the term surrogate model to refer to models trained by the attacker using the same process as the victim.
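The meta-gradient can be obtained by differentiating through the unrolled surrogate training, as in the sketch below; `gnn_forward`, `train_loss`, and `atk_loss` are assumed callables, and a practical implementation additionally needs differentiable (relaxed) adjacency entries, as in Zügner & Günnemann (2019).

```python
# Meta-gradient of the attacker loss w.r.t. the poisoned graph tensors,
# computed by unrolling T differentiable SGD steps of surrogate training.
import torch

def meta_gradient(gnn_forward, params, adj, feats, T, lr, train_loss, atk_loss):
    adj = adj.requires_grad_(True)
    feats = feats.requires_grad_(True)
    w = [p.detach().clone().requires_grad_(True) for p in params]
    for _ in range(T):                                    # unrolled inner training
        loss = train_loss(gnn_forward(w, adj, feats))
        grads = torch.autograd.grad(loss, w, create_graph=True)
        w = [wi - lr * gi for wi, gi in zip(w, grads)]    # differentiable SGD step
    l_atk = atk_loss(gnn_forward(w, adj, feats))
    return torch.autograd.grad(l_atk, (adj, feats))       # meta-gradients
```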
### 4.1 Changing Graph Structure
Iterations that alter the adjacency matrix assume that the node feature matrix is a constant. Consequently, we can treat \( \nabla_{G'_i}^{\text{meta}} \) as the meta-gradient with respect to graph \( G'_i \)'s adjacency matrix \( A_i \), and can compute a meta-score \( S(u,v) = \nabla_{G'_i}^{\text{meta}}[a_{uv}] \cdot (-2 a_{uv} + 1) \) for each pair of nodes \((u,v)\), where \([a_{uv}]\) denotes the value at position \((u,v)\) in \( \nabla_{G'_i}^{\text{meta}} \)'s adjacency-matrix component.
Our approach is predicated on the observation that altering the adjacency matrix for the pair \((u,v)\) with the highest computed meta-score \( S(u,v) \) is likely to best decrease the attacker’s loss \( L_{\text{atk}} \).
However, our assumptions (§3) limit what adjacency matrix modifications the attacker can perform, and so we only consider a subset of node pairs in this process. In particular, we impose the following constraints on the node pairs we consider: (a) either \( u \) or \( v \) must be an injected node; (b) neither \( u \) nor \( v \) can be within \( v_t \)'s k-hop neighborhood, thus ensuring that the attacks are long-distance; and (c) an injected node \( u \) has no more than one edge connecting it to a node in the original graph \( G \), a constraint we add to avoid cases where the optimization spends all of its time optimizing a single injected node.
We evaluate the effect of this last constraint in Appendix C.3.
Thus, iterations that change the adjacency matrix compute a score \( S(u,v) \) for every pair of nodes \((u,v)\) that meets our constraints, identify the pair \((u_m,v_m)\) with the largest score, and then add edge \((u_m,v_m)\) if none exists or remove it if it already exists.
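The structure-update step can be sketched as follows; the constraint bookkeeping (k-hop set, single-edge budget into \( G \)) is passed in as precomputed inputs, which is our assumption about how it would be organized.

```python
import numpy as np

def flip_best_edge(adj, meta_grad_adj, injected, khop_of_target, orig_n):
    """Pick the pair (u, v) with the largest meta-score S(u, v) and flip that edge."""
    score = meta_grad_adj * (-2 * adj + 1)               # S(u, v) from Section 4.1
    best, best_pair = -np.inf, None
    for u in injected:                                   # (a) one endpoint is injected
        budget_used = adj[u, :orig_n].sum() >= 1         # (c) at most one edge into G
        for v in range(adj.shape[0]):
            if v == u or v in khop_of_target or u in khop_of_target:
                continue                                  # (b) stay long-distance
            if v < orig_n and budget_used and adj[u, v] == 0:
                continue                                  # would add a second edge into G
            if score[u, v] > best:
                best, best_pair = score[u, v], (u, v)
    if best_pair is None:
        return adj
    u, v = best_pair
    adj[u, v] = adj[v, u] = 1 - adj[u, v]                 # add the edge, or remove it
    return adj
```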
### 4.2 Changing Node Features
Similarly, iterations where node features are changed assume that the adjacency matrix is a constant, and therefore use \( \nabla_{G'_i}^{\text{meta}} \) as the meta-gradient for the feature matrix. However, in this case, we do not use \( \nabla_{G'_i}^{\text{meta}} \) to compute a scoring function that selects and then updates node features for a single node.
Instead, we use \( \nabla_{G'_i}^{\text{meta}} \) to compute feature gradients, which we use to update \( G'_i \)'s feature matrix. Care must be taken when doing so, since we assume the attacker cannot change features for any nodes already present in the input graph \( G \) (§3). We impose this constraint by zeroing out the corresponding elements in \( \nabla_{G'_i}^{\text{meta}} \)'s feature matrix, and in what follows we refer to the resulting matrix as \( X_{\nabla_{G'_i}} \). Given this, we compute:
\[ X_{G'_{i+1}} = X_{G'_i} - \alpha X_{\nabla_{G'_i}} \]
where \( X_{G'_i} \) is \( G'_i \)'s feature matrix, and \( \alpha \) is the learning rate.
Empirically, we found that a single gradient update is often insufficient, so in practice each iteration repeats this process \( q \) times (and we compute a new \( \nabla_{G'_i}^{\text{meta}} \) after each update).
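A sketch of this masked update is below; `meta_grad_fn` stands in for a routine like the meta-gradient sketch in Section 4 and is a hypothetical helper, and the small `q` is only for illustration.

```python
import torch

def update_injected_features(features, injected_mask, meta_grad_fn,
                             alpha=0.1, q=10):
    """Repeat q masked gradient steps on the feature matrix.

    injected_mask: float vector of length N, 1.0 for injected nodes and 0.0
    otherwise, so gradients for nodes already in the original graph G are zeroed.
    """
    for _ in range(q):
        grad = meta_grad_fn(features)                     # meta-gradient over features
        grad = grad * injected_mask.unsqueeze(1)          # freeze original nodes' features
        features = features - alpha * grad
    return features
```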
## 5 The MimicLDT Attack via Embedding Collision
Our evaluation (§6) shows that MetaLDT is effective, but has such high memory and computational requirements (Table 2) that it is impractical to use with common graphs (e.g., the Arxiv dataset). MimicLDT is a cheaper attack that uses heuristics to mimic MetaLDT’s behavior. Our heuristics derive from the following observations about MetaLDT-generated attack graphs \( G' \):
(a) MetaLDT iterations reduce the embedding-space distance (as determined by the surrogate GNN) between \( v_t \) and existing nodes with the attacker’s chosen target label \( y_t \). We empirically demonstrate this phenomenon in Figure 2 by using MetaLDT to attack a GCN on the Cora dataset; we give more details about this setting in §6. The figure shows how the average L2 distance, in each iteration’s surrogate model’s embedding space, between \( v_t \) and nodes whose ground-truth label is \( y_t \) varies across iterations, and we observe that the optimization minimizes this distance.
(b) Edges between existing and injected nodes in \( G' \) tend to connect injected nodes to nodes labeled \( y_t \). We hypothesize that this is in support of the previous observation: an injected node that connects to a node \( v \) labeled \( y_t \) can reduce the embedding distance between \( v_t \) (the target) and \( v \).
(c) The edges connecting pairs of injected nodes in \( G' \) do not appear to have any noticeable patterns. This leads us to hypothesize that it is sufficient to randomly connect injected nodes with each other.
Next, we describe how MimicLDT’s heuristics allow it to efficiently generate attack graphs. Similar to MetaLDT, MimicLDT takes as input an initial graph \( G \), a target node \( v_t \), a target label \( y_t \), and assumes...
knowledge of how the attacked GNN has been trained. MimicLDT works as follows: (a) First, it trains a surrogate model \( f_0 \) using \( G \), which is used for the entire optimization; much of MimicLDT’s performance improvement comes from only needing to train one surrogate model. (b) Next, MimicLDT generates the structure of the attack graph \( G' \) using heuristics. (c) Finally, it optimizes the injected nodes' features in \( G' \) to produce the final attack graph.
### 5.1 Determining Graph Structure
Similar to MetaLDT, MimicLDT generates an initial attack graph \( G' \) from \( G \). To construct \( G' \), MimicLDT first selects a set of nodes, \( V_a \), in \( G \) whose label is \( y_t \), and who lie outside the target’s (i.e., \( v_t \)’s) k-hop neighborhood. We refer to the nodes \( V_a \) as attack points, and a hyperparameter \( r \) determines \( |V_a| \), the number of attack points chosen.
Next, for each attack point \( v_a \in V_a \), MimicLDT injects \( \Phi \) nodes \( V_{ga} \) and connects them (directly or indirectly) to \( v_a \). To do so, MimicLDT iterates over possible edges connecting nodes in the set \( V_{ga} \cup \{v_a\} \) (i.e., edges that either connect injected nodes to each other or to the attack point), and adds each edge to the graph with probability \( p = 0.5 \). Finally, it prunes any nodes in \( V_{ga} \) that are not reachable from \( v_a \). The final set of injected nodes may therefore have fewer than \( \Phi \) nodes.
The final graph structure produced by MimicLDT thus consists of all nodes and edges in \( G \), plus a set of injected nodes that are connected to each other and to attack points. In the rest of this section, we use \( V_i \) to refer to the set of injected nodes in \( G' \), and \( B(v_i) \) to refer to the attack point whose k-hop neighborhood contains the injected node \( v_i \in V_i \).
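The following networkx-based sketch illustrates the structure heuristic; integer node identifiers, the sampling of attack points, and the helper names are our assumptions.

```python
import random
import networkx as nx

def build_attack_structure(G, labels, target, y_t, k=3, num_attack_points=10,
                           phi=4, p=0.5):
    """Pick attack points labeled y_t outside the target's k-hop neighborhood,
    wire phi injected nodes to each one with edge probability p, and prune
    injected nodes that are not reachable from their attack point."""
    khop = nx.single_source_shortest_path_length(G, target, cutoff=k)
    candidates = [v for v in G if labels[v] == y_t and v not in khop]
    attack_points = random.sample(candidates, min(num_attack_points, len(candidates)))

    Gp = G.copy()
    next_id = max(G.nodes) + 1                            # assumes integer node ids
    owner = {}                                            # injected node -> B(v_i)
    for v_a in attack_points:
        group = list(range(next_id, next_id + phi))
        next_id += phi
        Gp.add_nodes_from(group)
        cluster = group + [v_a]
        for i, u in enumerate(cluster):                   # candidate edges in the cluster
            for w in cluster[i + 1:]:
                if random.random() < p:
                    Gp.add_edge(u, w)
        kept = [u for u in group if nx.has_path(Gp, u, v_a)]
        Gp.remove_nodes_from(set(group) - set(kept))
        owner.update({u: v_a for u in kept})
    return Gp, owner
```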
### 5.2 Determining Injected Node Features
The algorithm above generates a graph \( G' \) that contains all nodes in \( G \) and the set \( V_i \) of injected nodes. We do not assign labels to any of the injected nodes, and formulate the problem of assigning feature vectors to these injected nodes as an optimization problem, which we describe below.
**Optimization formulation.** Our feature optimization problem aims to meet two goals. First, based on our observations from MetaLDT, we try to ensure that the final node embedding \( h_{v_a}^{(L)} \) of any selected attack point \( v_a \in V_a \) is close to the target’s final node embedding \( h_{v_t}^{(L)} \). Second, similar to MetaLDT, we try to ensure that an injected node \( v_i \)'s feature vector \( X_{v_i} \) is similar to that of its attack point \( B(v_i) \).
Taking both optimization goals into account, our final optimization formulation is:
\[
X^*_{V_i} = \arg\min_{X_{V_i}} L_{atk},
\]
\[
L_{atk} = - \left( \frac{1}{|V_a|} \sum_{v_a} \text{Sim}_f(h_{v_a}^{(L)}, h_{v_t}^{(L)}) + \beta \times \frac{1}{|V_i|} \sum_{v_i} \text{Sim}_{in}(X_{v_i}, X_{B(v_i)}) \right)
\]
In this formulation, \( X_{V_i} \) is a \( |V_i| \times d \) matrix whose rows are the feature vectors of nodes in \( V_i \); \( \text{Sim}_f \) is a metric function (specific to the GNN used) that measures similarity between a pair of final node embeddings; and \( \text{Sim}_{in} \) is a metric function (specific to the input graph \( G \)) that measures similarity between a pair of nodes’ feature vectors.
The second term of the optimization in Eq. (5) aims to preserve homophily, but differs from the formulation used in MetaLDT and prior work (Chen et al., 2022): rather than maximizing the node-centric homophily score, which aggregates over each injected node's neighborhood, this formulation maximizes similarity between attack points and injected nodes. We found that this change in formulation improved our performance, and our empirical results (in Appendix A.3) show that it does not noticeably impact the homophily scores of injected nodes. Similar to MetaLDT, the \( \beta \) hyperparameter allows attackers to decide how much the generated attack prioritizes homophily.
---
1 In the evaluation we use \( 0 < r < 1 \), and select \( r|V_L| \) points (\( V_L \) is the set of labeled training nodes in \( G \)).
**Computing feature vectors.** We use a stochastic gradient descent based optimizer to compute feature vectors. Our evaluation uses GNNs to classify nodes in citation graphs, and consequently we use cosine similarity to measure similarity between node features (i.e., \(\text{Sim}_{in}\) is cosine similarity), and the \(L2\)-norm to measure similarity between final node embeddings (\(\text{Sim}_f\)).
We considered various initial values for \(X_{V_i}\), including the input features of the neighboring attack point \((X_{v_i} = X_{B(v_i)})\), the input features of a random neighbor, and the target’s input features. Empirically, we found no noticeable difference between these options: the embedding-space and feature-space distances converge to similar values regardless of initialization. During the optimization, we use the surrogate model \(f_\theta\) to compute \(h^{(L)}_{v_a}\) and \(h^{(L)}_{v_t}\).
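The feature optimization can be sketched as below; the surrogate's `embed` method, the index tensors, and the Adam optimizer are our assumptions rather than the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def optimize_injected_features(model, adj_norm, X, injected_idx, anchor_idx,
                               attack_idx, target_idx, beta=1.0,
                               steps=200, lr=0.1):
    """Minimize Eq. (5): pull attack-point embeddings toward the target's embedding
    (Sim_f = negative L2 distance) while keeping injected features cosine-similar
    to their attack points (Sim_in). anchor_idx[i] is the attack point B(v_i)
    of the i-th injected node."""
    X = X.clone()
    X_inj = X[injected_idx].clone().requires_grad_(True)
    opt = torch.optim.Adam([X_inj], lr=lr)
    for _ in range(steps):
        X_full = X.clone()
        X_full[injected_idx] = X_inj
        H = model.embed(X_full, adj_norm)                 # final node embeddings h^(L)
        sim_f = -torch.norm(H[attack_idx] - H[target_idx], dim=-1).mean()
        sim_in = F.cosine_similarity(X_inj, X[anchor_idx], dim=-1).mean()
        loss = -(sim_f + beta * sim_in)                   # L_atk in Eq. (5)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return X_inj.detach()
```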
## 6 Experiments
We run experiments on NVIDIA V100 GPUs with a 32 GB memory limit. Our evaluation aims to answer the following questions:
- Can our attacks poison existing GNN models and their fortified versions?
- Compared to MetaLDT, how effective is MimicLDT and can it scale to larger graphs?
- How do long-distance attacks compare to existing short-distance ones?
- Are our attacks stealthy, e.g., do they impact graph homophily?
- Can we launch effective end-to-end attacks?
**Datasets.** We use four graph datasets: Cora (Yang et al., 2016), Citeseer (Yang et al., 2016), PubMed (Yang et al., 2016) and Ogbn-arXiv (Hu et al., 2021). The largest graph, arXiv, is almost two orders of magnitude larger than the smallest, Cora. Appendix B provides details.
**GNN models.** We use three popular GNN models: GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), and GAT (Veličković et al., 2018). We use 3-layer models for most datasets; the exception is Cora, which is a small graph and for which we use a 2-layer model. We provide detailed model settings, the training process, and hyperparameters in Appendix B. In addition to vanilla models, we also evaluate our attacks against models that use the following five GNN defense mechanisms: ProGNN (Lin et al., 2020b), GNNGuard (Zhang & Zitnik, 2020), Soft-Median-GDC (Geisler et al., 2021b), Jaccard GCN (Wu et al., 2019), and SVD GCN (Entezari et al., 2020).
**Comparison with short-distance attacks.** Existing attacks perturb the target’s k-hop neighborhood and are thus short-distance attacks. We compare against three short-distance attacks: Nettack (Zügner et al., 2018), FGA (Chen et al., 2018), and IG-FGSM (Wu et al., 2019). In all cases, we use a loss function designed for our goal of changing a target node’s label to a specified one (details in Appendix E.1).
### 6.1 Effectiveness of MetaLDT Attack
We evaluate MetaLDT on Cora. Table 1 reports MetaLDT’s poison success rate for different GNN models, both vanilla ones as well as their fortified versions. The poison success rate is calculated over 200 experiments, each with a randomly chosen target node and target poison label. We configure \(\Delta = 68\), which limits the number of changes to the adjacency matrix. For each step of adjacency-matrix optimization, MetaLDT performs \(q = 1000\) optimization steps on the injected nodes’ features.
Table 1 shows that MetaLDT achieves a high attack success rate (84%~96%) on vanilla GNN models. When evaluating MetaLDT against robust models, we assume the attacker is aware of the defense mechanism used and adapts MetaLDT accordingly (Mujkanovic et al., 2022). However, doing so comes at the cost of increased memory consumption and computational overhead. Hence, for some robust models (GNNGuard, SoftMedianGDC, ProGNN), we stop MetaLDT’s inner training loop early, after 50 epochs instead of the regular 200, in order to avoid running out of memory (OOM). From Table 1 we can see that when MetaLDT’s inner training loop is allowed to converge (JaccardGCN, SVDGCN), its success rate remains high; stopping the inner training loop early, however, comes at a significant cost in poison success rate. It is crucial that MetaLDT adapts to the underlying GNN defense; we report the results of non-adaptive MetaLDT in §C.5.
**Comparing with short-distance attacks.** Table 1 also shows the range of performance achieved by existing short-distance modification attacks, including Nettack-direct (modifying the target’s immediate neighbors), Nettack-indirect (modifying the target’s k-hop neighborhood), FGA, and IG-FGSM. Detailed results are in §F (Table 14). We set the short-distance attack budget to 68 and leave experiments with varying perturbation budgets to §E.2. From Table 1 we can see that short-distance attacks can achieve higher success rates, often reaching 100%. However, short-distance attacks are susceptible
| | MetaLDT | MimicLDT | Short Distance |
|---------|---------|----------|----------------|
| **Vanilla** | | | |
| GCN | 0.96 | 0.67 | 0.79–1.00 |
| GraphSAGE | 0.87 | 0.63 | 0.42–0.96 |
| GAT | 0.84 | 0.60 | 0.53–0.97 |
| **Robust** | | | |
| GNNGuard | (0.53) | 0.70 | 0.94–1.00 |
| SoftMedianGDC | (0.58) | 0.55 | 0.46–1.00 |
| JaccardGCN | 0.91 | 0.66 | 0.47–1.00 |
| SVDGCN | 0.83 | 0.74 | 0.18–1.00 |
| ProGNN | (0.55) | 0.59 | 0.60–1.00 |
**Table 1:** Success rate of MetaLDT, MimicLDT and short-distance attacks (Nettack, FGA and IG-FGSM) over Cora. Numbers in parentheses indicate cases where MetaLDT could not complete, and we instead ran a variant where inner-training runs for 50 epochs. More detailed numbers are in Appendix §F.
| Dataset | Nodes | Edges | MetaLDT Time (s) | MetaLDT Mem. | MimicLDT Time (s) | MimicLDT Mem. |
|---------|-------|-------|------------------|--------------|-------------------|---------------|
| Cora | 2708 | 5429 | 82198.92 | 2.91 GB | 43.17 | 1.38 GB |
| Citeseer | 3312 | 4536 | 82510.19 | 3.01 GB | 41.08 | 1.54 GB |
| PubMed | 19717 | 44338 | — | OOM | 105.82 | 1.89 GB |
| ArXiv | 169343 | 1157799 | — | OOM | 692.44 | 9.51 GB |
**Table 2:** Total running time (in seconds) and GPU memory cost of generating one poisoned graph for various datasets on GCN model.
| Dataset | GCN | GraphSAGE | GAT | GNNGuard | SoftMedianGDC | JaccardGCN | SVDGCN | ProGNN |
|---------|-----|-----------|-----|----------|---------------|------------|--------|--------|
| Citeseer | 0.72 | 0.69 | 0.66 | 0.70 | 0.59 | 0.64 | 0.67 | 0.61 |
| PubMed | 0.71 | 0.69 | 0.69 | 0.70 | 0.56 | 0.67 | 0.60 | 0.57 |
| ArXiv | 0.74 | 0.73 | 0.70 | 0.64 | 0.59 | 0.63 | 0.62 | 0.58 |
**Table 3:** Poison success rate of MimicLDT (GCN, GraphSAGE, and GAT are vanilla models; the remaining columns are robust models). More detailed numbers are in Appendix §F.
| | MetaLDT | MimicLDT |
|---|---------|----------|
| Degree changes | 0.0419±0.0055 | — |
| Homophily changes | 0.0142±0.0015 | — |
**Table 4:** Changes in the distribution of graph node degrees and homophily, measured using Earth Mover’s Distance, for the Cora dataset. The values represent the average distance between each poisoned graph and the original input. We report on other datasets in Appendix §D.2 and §D.3.
to GNN analysis tools such as GNNExplainer (Ying et al., 2019). In §E.3, we show that GNNExplainer can detect short-distance attacks with reasonable accuracy (0.45~0.85) and recall (0.32~0.84).
**MetaLDT is much more expensive than MimicLDT.** MetaLDT requires substantial compute and memory resources due to its extensive unrolling process. Table 2 compares the running time (on a V100) and the memory cost of MetaLDT and MimicLDT for the GCN model over various datasets. Not only is MetaLDT three orders of magnitude slower than MimicLDT, it can also only handle small graphs (Cora, Citeseer) of a few thousand nodes before running out of memory. Thus, unless otherwise mentioned, the rest of our evaluation uses MimicLDT and ArXiv, our largest dataset.
**Ablation study and more analysis.** We perform an extensive ablation study and leave its discussion to the Appendix. In particular, we study the effects of hyperparameters (§C.4), and explore the design rationale of the optimization process (§C.1) and the optimization constraints (§C.3). Finally, in §C.2, we show that there are benefits to optimizing the adjacency matrix. Nevertheless, MimicLDT’s heuristic of randomly connecting to existing nodes labeled with the target label is a fast though imperfect alternative, reducing the success rate from 96% to 79.5%.
### 6.2 Effectiveness of MimicLDT Attack
Because MimicLDT is computationally feasible on larger graphs, we run it over a larger variety of datasets. Table 3 shows MimicLDT’s poison success rate over Citeseer, PubMed and arXiv. For comparison with MetaLDT, we show MimicLDT’s Cora results in Table 1. We set the number of attack points to \( r \times |\mathcal{V}_L| \), where \( \mathcal{V}_L \) is the
set of labeled training nodes, and we use $r = 0.5\%$ for arXiv and PubMed, and $r = 1\%$ for Cora and Citeseer (which are smaller). MimicLDT injects a maximum of $\Delta = \Phi \times r \times |V_L|$ nodes, and we use $\Phi = 4$ for all experiments. Due to time constraints, instead of training a surrogate model, our experiments directly use the weights of the models under attack; we have evaluated both approaches on Cora and found that they result in similar success rates. As shown in Table 3, MimicLDT achieves a decent poison success rate for vanilla models (66%~74%) as well as robust models (56%~70%).
**Effect of varying the number of attack points:** We study the effect of varying the number of attack points. Figure 3 shows the poison success rate as $r$ varies (while keeping $\Phi$ fixed). Increasing $r$, and thus the number of attack points, improves the attack success rate. §D.1 studies the effects of varying $\Phi$. The total number of injected nodes is determined by both $r$ and $\Phi$.
**Attack stealthiness: degree distribution:** We examine whether poisoned graphs preserve the node degree distribution. We measure the changes to the degree distribution using the Earth Mover’s Distance (EMD) metric. The average distance between each poisoned graph and the original clean graph for Cora is $0.039 \pm 0.002$. Statistics on other datasets can be found in §D.2. The attacks cause only slight changes to the node degree distribution.
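A small sketch of this measurement, assuming networkx graphs and SciPy's one-dimensional Wasserstein distance as the EMD implementation:

```python
from scipy.stats import wasserstein_distance

def degree_emd(G_clean, G_poisoned):
    """Earth Mover's Distance between clean and poisoned node-degree distributions."""
    clean_degrees = [d for _, d in G_clean.degree()]
    poisoned_degrees = [d for _, d in G_poisoned.degree()]
    return wasserstein_distance(clean_degrees, poisoned_degrees)
```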
**Attack stealthiness: homophily:** The second term of MimicLDT’s loss function (Eq. (5)) keeps injected nodes similar to the attack points they attach to. It serves a similar goal as prior work (Chen et al., 2022), which is to ensure that injected nodes do not significantly impact graph homophily. The hyperparameter $\beta$ controls the importance of the second term. Figure 4 measures the similarity of neighboring and non-neighboring nodes in the arXiv graph, and compares it to the similarity between injected nodes and their attack points for varying $\beta$. Appendix D.3 gives the detailed setup. We observe that the larger the value of $\beta$, the more similar injected nodes appear to their attack points. As we note in §5, we use feature-vector similarity as a proxy for the standard node-centric homophily metric (Chen et al., 2022). In §D.3 we show this does not affect the homophily results.
### 6.3 End-to-end attacks
The attacks generated by MetaLDT and MimicLDT inject fake nodes whose features lie in a continuous space. They are therefore not end-to-end attacks for graphs with discrete features, such as citation graphs whose raw node features are natural-language texts; an end-to-end attack needs to inject nodes with textual features. We extend our design to perform such an attack.
Suppose some language model such as SciBERT (Beltagy et al., 2019) is used to encode a node’s raw text into an embedding vector in continuous space. Our extension trains a decoder that can generate text given an embedding vector, which corresponds to some fake node’s features as computed by MimicLDT (or MetaLDT). We provide more details on the design and evaluation of the end-to-end attack in Appendix G and give example texts generated for the fake nodes (Figure 11 in §G).
## 7 Conclusion
Our work shows that GNNs are susceptible to long-distance injection attacks, a type of attack that (to the best of our knowledge) has not been investigated in the past. When compared to short-distance attacks, where the attacker modifies the target’s neighborhood, long-distance attacks require injecting a larger number of nodes. However, detecting these nodes is challenging: they lie outside the target’s k-hop neighborhood, and thus a defender must consider the influence of all nodes in the graph.
References
Iz Beltagy, Kyle Lo, and Arman Cohan. Scibert: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676, 2019.
Yoshua Bengio. Gradient-based optimization of hyperparameters. Neural Computation, 12(8): 1889–1900, 2000. doi: 10.1162/089976600300015187.
Aleksandar Bojchevski and Stephan Günnemann. Adversarial attacks on node embeddings via graph poisoning. In International Conference on Machine Learning (ICML), 2019.
Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, and Junzhou Huang. A restricted black-box adversarial framework towards attacking graph embedding models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3389–3396, 2020.
Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797, 2018.
Jinyin Chen, Yixian Chen, Haibin Zheng, Shijing Shen, Shanqing Yu, Dan Zhang, and Qi Xuan. Mga: momentum gradient attack on network. IEEE Transactions on Computational Social Systems, 8(1): 99–109, 2020.
Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, and James Cheng. Understanding and improving graph injection attack by promoting unnoticeability. In International Conference on Learning Representations (ICLR), 2022.
Enyan Dai, Minhua Lin, Xiang Zhang, and Suhang Wang. Unnoticeable backdoor attacks on graph neural networks. 2023.
Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. In International conference on machine learning (ICML), 2018.
Alexandre Duval and Fragkiskos D Malliaros. Graphsvx: Shapley value explanations for graph neural networks. In Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, September 13–17, 2021, Proceedings, Part II 21, pp. 302–318. Springer, 2021.
Negin Entezari, Saba A Al-Sayouri, Amirali Darvishzadeh, and Evangelos E Papalexakis. All you need is low (rank) defending against adversarial attacks on graphs. In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 169–177, 2020.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, 2017.
Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, and Stephan Günnemann. Robustness of graph neural networks at scale. In Advances in Neural Information Processing Systems, 2021a.
Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, and Stephan Günnemann. Robustness of graph neural networks at scale. In Neural Information Processing Systems (NeurIPS 2021), 2021b.
Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Neural Information Processing Systems (NIPS 2017), 2017.
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2021.
|
ikmuHqugN7
|
SCALING CONVEX NEURAL NETWORKS WITH BURER-MONTEIRO FACTORIZATION
Arda Sahiner
Arcus Inc.
Stanford University
Tolga Ergen
LG AI Research
Batu Ozturkler
Stanford University
John Pauly
Stanford University
Morteza Mardani
NVIDIA Corporation
Mert Pilanci
Stanford University
ABSTRACT
It has been demonstrated that the training problem for a variety of (non) linear two-layer neural networks (such as two-layer perceptrons, convolutional networks, and self-attention) can be posed as equivalent convex optimization problems, with an induced regularizer which encourages low rank. However, this regularizer becomes prohibitively expensive to compute at moderate scales, impeding the training of convex neural networks. To this end, we propose applying the Burer-Monteiro factorization to convex neural networks, which for the first time enables a Burer-Monteiro perspective on neural networks with non-linearities. This factorization leads to an equivalent yet computationally tractable non-convex alternative with no spurious local minima. We develop a novel relative optimality bound for stationary points of the Burer-Monteiro factorization, providing verifiable conditions under which any stationary point is a global optimum. For the first time, we show that linear self-attention with sufficiently many heads has no spurious local minima.
## 1 INTRODUCTION
It has been demonstrated that the training problem for (non-linear) two-layer neural networks is equivalent to a convex program (Pilanci & Ergen, 2020; Ergen & Pilanci, 2020; Sahiner et al., 2021b; Ergen et al., 2021; Sahiner et al., 2021a). This has been observed for a variety of architectures, including multi-layer perceptrons (MLPs) (Pilanci & Ergen, 2020; Sahiner et al., 2021b), convolutional neural networks (CNNs) (Ergen & Pilanci, 2020; Sahiner et al., 2021c), and self-attention based transformers (Sahiner et al., 2022). A major benefit of convex training of neural networks is that global optimality is guaranteed, which brings transparency to training neural networks.
The convex formulation of neural networks induces biases by regularization of the network weights. For linear activation, the convex model directly imposes nuclear-norm regularization, which is well known to encourage low-rank solutions (Recht et al., 2010). For ReLU activation, however, the convex model induces a type of nuclear norm which promotes sparse factorization while the left factor is constrained to an affine space (Sahiner et al., 2021b). This constrained nuclear norm is NP-hard to compute, which impedes the utility of convex neural networks with ReLU activation. To address this computational challenge, we seek a method which (i) inherits the per-iteration complexity of non-convex neural network training, and (ii) inherits the optimality guarantees and transparency of convex training. To find a solution, we leverage the well-studied Burer-Monteiro (BM) factorization (Burer & Monteiro, 2003).
BM has been applied as an efficient solution strategy for problems ranging from matrix factorization (Zheng & Lafferty, 2016; Park et al., 2017; Ge et al., 2017; Gillis, 2017) to rank minimization (Mardani et al., 2013; Recht et al., 2010; Wang et al., 2017) and matrix completion (Mardani et al., 2015; Ge et al., 2017). BM has also been used for over-simplified neural networks (Kawaguchi, 2016; Haeffele & Vidal, 2017; Du & Lee, 2018), where optimality conditions for local minima are provided. However, no work has deployed the BM factorization for practical non-linear neural networks, and no guarantees are available about the optimality of stationary points.
Thus, our focus in this work is to adapt BM for practical two-layer (non-linear) convex neural networks. We consider three common architectures, namely MLPs, CNNs, and self-attention networks. For these scenarios, we develop verifiable relative optimality bounds for all local minima and stationary points, which are easy to compute and interpret. In light of these conditions, we identify useful insights about the aspects of neural networks that contribute to optimality. In particular, we observe that for self-attention networks all local minima coincide with the global optima if there are sufficiently many heads. The optimality guarantees also provide useful algorithmic insights, allowing one to verify whether light-weight first-order methods such as SGD achieve the global optimum for the non-convex training of neural networks. Our experiments with image classification tasks indicate that this BM factorization enables layerwise training of convex CNNs, which for the first time allows convex networks to match the performance of multi-layer, end-to-end trained non-convex CNNs.
### 1.1 Contributions
All in all, our contributions are summarized as follows:
• We propose the BM factorization for efficiently solving convex neural networks with ReLU activation for moderate and large scales. This is the first time BM theory has been applied to the non-linear neural network setting to the best of our knowledge.
• We derive a novel bound on the relative optimality of the stationary points of the BM factorization for neural networks.
• We identify simple and verifiable conditions which guarantee a stationary point of the non-convex BM formulation achieves the global optimum of the convex neural network.
• We provide insights into the fundamental building blocks of neural networks that contribute to optimality; e.g. that linear self-attention has no spurious local minima if it has sufficiently many heads.
• Our experiments verify the proposed relative optimality bound for stationary points of the BM factorization, and uncover cases where SGD converges to saddle points, even in two-layer neural networks.
### 1.2 Related Work
Burer-Monteiro factorization. The Burer-Monteiro (BM) factorization was first introduced in (Burer & Monteiro, 2003, 2005). There has been a long line of work studying the use of this factorization for solving SDPs (Boumal et al., 2016; Cifuentes & Moitra, 2019; Waldspurger & Waters, 2020; Erdogdu et al., 2021). In the rectangular matrix case, gradient descent converges to a global optimum of the matrix factorization problem with high probability for certain classes of matrices (Zheng & Lafferty, 2016). The BM factorization has been also studied in the rectangular case in more generic settings (Bach et al., 2008; Haeffele et al., 2014; Haeffele & Vidal, 2017).
Nuclear norm and rank minimization. The ability of nuclear norm regularization to induce low rank has been studied extensively in compressed sensing (Candes & Recht, 2009; Recht et al., 2010; Candès & Tao, 2010). BM factorization has been applied to scale up nuclear-norm minimization (Mardani et al., 2015, 2013). It has also been deployed for low-rank matrix factorization (Cabral et al., 2013; Zhu et al., 2017; Park et al., 2017; Ge et al., 2017). The results show that all second-order critical points of the BM factorization are global optima if certain qualification conditions are met.
SGD for non-convex neural networks. It has been shown that for over-parameterized two-layer linear networks, all local minima are global minima (Kawaguchi, 2016). Accordingly, a line of work has attempted to show that gradient descent or its modifications provably find local minima and escape saddle points (Ge et al., 2015; Lee et al., 2016; Jin et al., 2017; Daneshmand et al., 2018). However, these works assume Lipschitz gradients and Hessians of the non-convex objective, which is not typically satisfied. Another line of work shows that gradient descent converges to global optima for sufficiently highly over-parameterized neural networks, with either the parameter count being a high-order polynomial of the sample count (Du et al., 2018, 2019; Arora et al., 2019), or the network architecture being simple (Du & Lee, 2018). In practice, it has been empirically observed that SGD can converge to local maxima, or get stuck in saddle points (Du et al., 2017; Ziyin et al., 2021). For
unregularized matrix factorization, it has also recently been shown that randomly initialized gradient descent provably converges to global minima (Ye & Du, 2021).
Convex neural networks. There is a long history of architecting convex optimization problems that mimic the performance of neural networks (Zhang et al., 2016, 2017). It has recently been found that ReLU neural networks have equivalent convex programs for training, such as networks with scalar outputs (Pilanci & Ergen, 2020), vector-outputs (Sahiner et al., 2021b), convolutional networks (Ergen & Pilanci, 2020; Sahiner et al., 2021c), polynomial-activation networks (Bartan & Pilanci, 2021), batch-norm based networks (Ergen et al., 2021), Wasserstein GANs (Sahiner et al., 2021a), and self-attention networks (Sahiner et al., 2022). Despite efforts in developing efficient solvers, convex networks are only effectively trainable at small scales (Bai et al., 2022; Mishkin et al., 2022). Our novelty is to adapt BM factorization as a fast and scalable solution for training convex networks, with simple, verifiable conditions for global optimality.
## 2 PRELIMINARIES
We denote $(\cdot)_+ := \max\{0, \cdot\}$ as the ReLU non-linearity. We use superscripts, say $A^{(i_1,i_2)}$, to denote blocks of matrices, and brackets, say $A[i_1,i_2]$, to denote elements of matrices. We let $1$ be the vector of ones of appropriate size, $\|\cdot\|_H$ be the $\ell_p$ norm with $p = H$, $\|\cdot\|_F$ be the Frobenius norm, and $B_H$ be the unit $H$-norm ball, $\{u : \|u\|_H \leq 1\}$. Unless otherwise stated, let $F$ be a convex, differentiable function. We use $n$ to denote the number of samples, and $c$ to denote the output dimension of each network. All proofs are presented in Appendix A.
### 2.1 TWO-LAYER NEURAL NETWORKS AS CONVEX PROGRAMS
A line of work has demonstrated that two-layer neural networks are equivalent to convex optimization problems. We consider a data matrix $X \in \mathbb{R}^{n \times d}$ and a two-layer $\sigma$-activation network with $c$ outputs, $m$ neurons, and weight-decay parameter $\beta > 0$:
$$p^*_{MLP} := \min_{W_1 \in \mathbb{R}^{d \times m}, W_2 \in \mathbb{R}^{c \times m}} F(\sigma(XW_1)W_2^\top) + \frac{\beta}{2} \sum_{j=1}^{m} (\|w_{1j}\|_2^2 + \|w_{2j}\|_2^2).$$
When $\sigma$ is a linear activation and $m \geq m^*$ for some $m^* \leq \min\{d,c\}$, this problem is equivalent to (Rennie & Srebro, 2005, Section 2.2)
$$p^*_{MLP} = \min_{Z \in \mathbb{R}^{d \times c}} F(XZ) + \beta \|Z\|_*,$$
whereas for a ReLU activation and $m \geq m^*$ for some unknown, problem-dependent $m^* \leq nc$ (Sahiner et al., 2021b, Thm. 3.1),
$$p^*_{RMLP} = \min_{Z_j \in \mathbb{R}^{d \times c}} F(\sum_{j=1}^{P} D_j X Z_j) + \beta \sum_{j=1}^{P} \|Z_j\|_{*,K_j}, \quad K_j := (2D_j - I_n)X$$
where $\{D_j\}_{j=1}^{P} = \{\text{diag}(1\{Xu \geq 0\}) : u \in \mathbb{R}^d\}$ enumerates the possible activation patterns generated from $X$, and the number of such patterns satisfies $P \leq 2r \left(\frac{e(n-1)}{r}\right)^r$, where $r := \text{rank}(X)$ (Stanley et al., 2004; Pilanci & Ergen, 2020). The expression (3) also involves a constrained nuclear norm, which is defined as
$$\|Z\|_{*,K} := \min_{t \geq 0} t \text{ s.t. } Z \in tC, C := \text{conv}\{Z = uv^\top : Ku \geq 0, \|u\|_2 \leq 1, \|v\|_2 \leq 1\}.$$
This norm is a quasi-nuclear norm, which differs from the standard nuclear norm in that the factorization upon which it relies imposes a constraint on its left factors. In convex ReLU neural networks, this norm enforces the existence of $\{u_k,v_k\}$ such that $Z = \sum_k u_kv_k^\top$ and $D_j X Z = \sum_k (Xu_k)_+ v_k^\top$, and penalizes $\sum_k \|u_k v_k^\top\|_*$. This norm is NP-hard to compute (Sahiner et al., 2021b). A variant of these ReLU activations, called gated ReLU activations, achieves the piecewise linearity of ReLU activations without enforcing the constraints (Fiat et al., 2019). Specifically, the ReLU gates are fixed to some $\{h_j\}_{j=1}^{P}$ to form
$$\sigma(Xw_{1j}) := \text{diag}(1\{Xh_j \geq 0\})(Xw_{1j}) = D_j X w_{1j}.$$
With gated ReLU activation, the equivalent convex program is given by (Sahiner et al., 2022)
$$p^*_{GMLP} = \min_{Z_j \in \mathbb{R}^{d \times c}} F\left(\sum_{j=1}^{P} D_j X Z_j\right) + \beta \sum_{j=1}^{P} \|Z_j\|_*, \tag{5}$$
which thereby converts the constrained nuclear norm penalty to a standard nuclear norm penalty, improving the complexity of the ReLU network. In addition to the multi-layer perceptron (MLP) formulation, two-layer ReLU-activation convolutional neural networks (CNNs) with global average pooling have been demonstrated to be equivalent to convex programs as well (Sahiner et al., 2021b; Ergen & Pilanci, 2020). The non-convex formulation is given by
$$p^*_{RCNN} := \min_{w_{1j} \in \mathbb{R}^h, w_{2j} \in \mathbb{R}^c} \sum_{i=1}^{n} F\left(\sum_{j=1}^{m} w_{2j} 1^\top (X_i w_{1j})_+\right) + \frac{\beta}{2} \sum_{j=1}^{m} \left(\|w_{1j}\|_2^2 + \|w_{2j}\|_2^2\right), \tag{6}$$
where samples $X_i \in \mathbb{R}^{K \times h}$ are represented by patch matrices, which hold a convolutional patch of size $h$ in each of their $K$ rows. It has been shown (Sahiner et al., 2021b) that as long as $m \geq m^*$ where $m^* \leq nc$, this is equivalent to a convex program (Sahiner et al., 2021b, Cor. 5.1)
$$p^*_{RCNN} = \min_{Z_j \in \mathbb{R}^{h \times c}} \sum_{i=1}^{n} F\left(\sum_{j=1}^{P} 1^\top D_j^{(i)} X_i Z_j\right) + \beta \sum_{j=1}^{P} \|Z_j\|_{*,K_j} \tag{7}$$
$$K_j := (2D_j - I_{nK})X, \quad X := \begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix}$$
where $\{D_j\}_{j=1}^{P} = \{\text{diag}(1\{Xu \geq 0\}) : u \in \mathbb{R}^h\}$ and $D_j^{(i)} \in \mathbb{R}^{K \times K}$. The only exponential dependence of $P$ is on $h$, which is typically fixed.
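For concreteness, the activation-pattern matrices \(D_j\) that appear in (3) and (7) can be collected by sampling random directions, as in the sketch below; random Gaussian sampling (rather than exact enumeration of the hyperplane arrangement) is our simplification.

```python
import numpy as np

def sample_activation_patterns(X, num_samples=1000, seed=0):
    """Collect distinct patterns diag(1{Xu >= 0}) by sampling random directions u."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    patterns = set()
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        patterns.add(tuple((X @ u >= 0).astype(np.int8)))
    # each distinct sign pattern corresponds to one D_j = diag(pattern)
    return [np.diag(np.array(p, dtype=float)) for p in patterns]
```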
Lastly, we review existing convexity results for self-attention transformers (Sahiner et al., 2022). We have the following non-convex objective for a single block of multi-head self-attention with $m$ heads, where $X_i \in \mathbb{R}^{s \times d}$ with $s$ tokens and $d$ features
$$p^*_{SA} := \min_{W_{1j} \in \mathbb{R}^{d \times d}, W_{2j} \in \mathbb{R}^{d \times c}} \sum_{i=1}^{n} F\left(\sum_{j=1}^{m} \sigma(X_i W_{1j} X_i^\top) X_i W_{2j}\right) + \frac{\beta}{2} \sum_{j=1}^{m} \left(\|W_{1j}\|_F^2 + \|W_{2j}\|_F^2\right), \tag{8}$$
for which a variety of objectives $F$ can be posed, including classification (e.g., $F$ incorporates global average pooling followed by softmax-cross-entropy with labels) or denoising (e.g., $F$ is a squared loss). For linear, gated ReLU, and ReLU activation, this is equivalent to a convex program (see Appendices A.12, B.3). Here, we show that these architectures are amenable to the BM factorization.
### 2.2 The Burer-Monteiro Factorization
First proposed by Burer & Monteiro (2003), the Burer-Monteiro (BM) factorization proposes to solve SDPs over some square matrix $Q$ in terms of rectangular factors $R$, where $Q$ is substituted by $RR^\top$. It was first demonstrated that solving over $R$ does not introduce spurious local minima for rank-constrained SDPs, provided $\text{rank}(R) \geq \text{rank}(Q^*)$ for an optimal solution $Q^*$ of the original SDP (Burer & Monteiro, 2005). We seek applications where we optimize over a non-square matrix $Z$, i.e.
$$p^*_{CVX} := \min_{Z \in \mathbb{R}^{d \times c}} F(Z) \tag{9}$$
for a convex, differentiable function $F$. One may approach this by factoring $Z = UV^\top$, where $U \in \mathbb{R}^{d \times m}$, $V \in \mathbb{R}^{c \times m}$ for some arbitrary choice $m$. Then, we have an equivalent non-convex problem over $R := \begin{bmatrix} U \\ V \end{bmatrix}$, for $f(R) = F(UV^\top)$:
$$p^*_{CVX} = \min_{R} f(R). \tag{10}$$
Noting that (9) is convex over $RR^\top = \begin{bmatrix} UU^\top & UV^\top \\ VU^\top & VV^\top \end{bmatrix}$, one may directly apply the result of Boumal et al. (2020) to conclude that as long as $m \geq d + c$, all local minima of (10) are global
minima of (9) (see Appendix A.2). Further, work from Bach et al. (2008) and Haeffele et al. (2014) demonstrates that all rank-deficient local minimizers of (10) achieve the global minimum \( p^*_{CVX} \).
A long line of work has analyzed the conditions where known non-convex optimization algorithms will converge to second-order critical points (local minima) (Ge et al., 2015; Jin et al., 2017; Daneshmand et al., 2018). Under the assumption of a bounded \( f \) and its Hessian, a second-order critical point can be found by noisy gradient descent (Ge et al., 2015), or other second-order algorithms (Sun et al., 2015). Even vanilla gradient descent with random initialization has been demonstrated to almost surely converge to a local minimum for \( f \) with Lipschitz gradient (Lee et al., 2016). However, if the gradient of \( f \) is not Lipschitz-continuous, there are no guarantees that gradient descent will find a second-order critical point of (10): one may encounter a stationary point which is a saddle. For example, in the linear regression setting, i.e.
\[
f(R) = \|XUV^\top - Y\|_F^2,
\]
the gradient of \( f \) is Lipschitz continuous with respect to \( U \) when \( V \) is fixed and vice-versa, but not Lipschitz continuous with respect to \( R \) (Mukkamala & Ochs, 2019). Thus, one may not directly apply the results of Ge et al. (2015); Sun et al. (2015); Lee et al. (2016) in this case. Instead, we seek to understand the conditions under which stationary points to (10) correspond to global optima of (9). One such condition is given in Mardani et al. (2013, 2015).
**Theorem 2.1** (From Mardani et al. (2013)). Stationary points \( \hat{U}, \hat{V} \) of the optimization problem
\[
p^* := \min_{U,V} \frac{1}{2} \|UV^\top - Y\|_F^2 + \frac{\beta}{2} (\|U\|_F^2 + \|V\|_F^2)
\]
correspond to global optima \( Z^* = \hat{U}\hat{V}^\top \) of the equivalent convex optimization problem
\[
p^* = \min_Z \frac{1}{2} \|Z - Y\|_F^2 + \beta \|Z\|_*
\]
provided that \( \|Y - \hat{U}\hat{V}^\top\|_2 \leq \beta \).
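As an illustration (our own sketch, with an arbitrary random \( Y \), plain gradient descent, and a small tolerance for numerical error), one can solve the BM problem from Theorem 2.1 and check the certificate \( \|Y - \hat{U}\hat{V}^\top\|_2 \leq \beta \) against the closed-form singular-value-thresholding solution of the corresponding convex problem:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, m, beta, lr = 30, 20, 25, 1.0, 0.01
Y = rng.standard_normal((d, c))
U = 0.1 * rng.standard_normal((d, m))
V = 0.1 * rng.standard_normal((c, m))

for _ in range(20000):                                    # gradient descent on the BM objective
    R = U @ V.T - Y
    U, V = U - lr * (R @ V + beta * U), V - lr * (R.T @ U + beta * V)

Z_bm = U @ V.T
# Convex solution: soft-threshold the singular values of Y by beta.
Uy, s, Vt = np.linalg.svd(Y, full_matrices=False)
Z_cvx = Uy @ np.diag(np.maximum(s - beta, 0.0)) @ Vt

certificate = np.linalg.svd(Y - Z_bm, compute_uv=False)[0] <= beta + 1e-6
print("certificate ||Y - UV^T||_2 <= beta:", certificate)
print("distance to convex optimum:", np.linalg.norm(Z_bm - Z_cvx))
```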
## 3 Burer-Monteiro Factorization for Convex Neural Networks
### 3.1 MLPs
We first seek to compare the convex formulations of the MLP training problem (2), (3), and (5) to their BM factorizations. We describe how to find the BM factorization for any convex MLP.
**Lemma 3.1.** For any matrix \( M \in \mathbb{R}^{n \times d_c} \), let \( f(U, V) := F(MUV^\top) \) be a differentiable function. For any \( \beta > 0 \) and arbitrary vector norms \( \| \cdot \|_R \) and \( \| \cdot \|_C \), we define the Burer-Monteiro factorization
\[
p^* := \min_{U \in \mathbb{R}^{d_c \times m}, V \in \mathbb{R}^{d_r \times m}} f(U, V) + \frac{\beta}{2} \left( \sum_{j=1}^m \|u_j\|_C^2 + \|v_j\|_R^2 \right).
\]
For the matrix norm \( \| \cdot \|_D \) defined as
\[
\|Z\|_D := \max_R \text{trace}(R^\top Z) \text{ s.t. } u^\top Rv \leq 1 \forall u \in B_C, \forall v \in B_R,
\]
the problem (14) is equivalent to the convex optimization problem
\[
p^* = \min_{Z \in \mathbb{R}^{d_c \times d_r}} F(MZ) + \beta \|Z\|_D.
\]
**Remark 3.2.** In the case of a linear MLP, \( M = X \), \( d_c = d \), \( d_r = c \), and \( \| \cdot \|_D = \| \cdot \|_* \), so by the definition of \( \| \cdot \|_D \), the norms \( \| \cdot \|_R \) and \( \| \cdot \|_C \) in the corresponding BM factorization are both the \( \ell_2 \) norm, i.e., \( R = C = 2 \) (Bach et al., 2008). For a gated ReLU network, the regularizer is still the nuclear norm, and thus the same \( R = C = 2 \) regularization appears in the BM factorization. In the case of the ReLU MLP, the nuclear norm is
---
1Under mild conditions, see Appendix A.3
replaced by \( \| \cdot \|_D = \sum_{j=1}^{P} \| \cdot \|_{*,K_j} \), which in the BM factorization amounts to having the constraint \( K_j U_j \geq 0 \). We express the BM factorization of convex MLPs below.
\[
p^*_{LMLP} = \min_{U \in \mathbb{R}^{d \times m}, V \in \mathbb{R}^{c \times m}} F(XUV^\top) + \frac{\beta}{2} \left( \|U\|_F^2 + \|V\|_F^2 \right)
\] (17)
\[
p^*_{GMLP} = \min_{U_j \in \mathbb{R}^{d \times m}, V_j \in \mathbb{R}^{c \times m}} F\left(\sum_{j=1}^{P} D_j X U_j V_j^\top\right) + \frac{\beta}{2} \sum_{j=1}^{P} \left( \|U_j\|_F^2 + \|V_j\|_F^2 \right)
\] (18)
\[
p^*_{RMLP} = \min_{\substack{U_j \in \mathbb{R}^{d \times m}, V_j \in \mathbb{R}^{c \times m} \\ K_j U_j \geq 0 \ \forall j \in [P]}} F\left(\sum_{j=1}^{P} D_j X U_j V_j^\top\right) + \frac{\beta}{2} \sum_{j=1}^{P} \left( \|U_j\|_F^2 + \|V_j\|_F^2 \right)
\] (19)
To the best of our knowledge, (19) presents the first application of BM factorization to a non-linear neural network, which is enabled by the convex model (3).
In the linear case, the BM factorization (17) is identical to the original non-convex formulation of a linear MLP with \( m \) neurons. In the case of gated ReLU, the BM factorization when \( m = 1 \) is equivalent to the original non-convex formulation. However, for ReLU-activation two-layer networks, the BM factorization even when \( m = 1 \) corresponds to a different (i.e., constrained, rather than ReLU-activation) model than the non-convex formulation. While the original convex program is NP-hard due to the quasi-nuclear norm (Sahiner et al., 2021b), the per-iteration complexity of the BM factorization is much lower than for the convex ReLU MLP.
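As a concrete illustration, a minimal PyTorch sketch of training the gated ReLU BM factorization (18) might look as follows; the data tensors, fixed gate vectors \(h_j\), squared loss for \(F\), and SGD optimizer are our assumptions.

```python
import torch

def train_bm_gated_relu(X, Y, H, m=8, beta=1e-3, lr=1e-2, epochs=500):
    """X: [n, d] data, Y: [n, c] targets, H: [d, P] fixed gate vectors h_j."""
    n, d = X.shape
    c, P = Y.shape[1], H.shape[1]
    D = (X @ H >= 0).float()                              # [n, P]; column j is diag(D_j)
    U = (0.1 * torch.randn(P, d, m)).requires_grad_()     # factors U_j
    V = (0.1 * torch.randn(P, c, m)).requires_grad_()     # factors V_j
    opt = torch.optim.SGD([U, V], lr=lr)
    for _ in range(epochs):
        XU = torch.einsum('nd,pdm->pnm', X, U)            # X U_j for every pattern j
        gated = D.T.unsqueeze(-1) * XU                    # apply D_j row-wise
        out = torch.einsum('pnm,pcm->nc', gated, V)       # sum_j D_j X U_j V_j^T
        loss = ((out - Y) ** 2).mean() \
               + 0.5 * beta * (U.pow(2).sum() + V.pow(2).sum())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return U.detach(), V.detach()
```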
The BM factorizations of these convex MLPs are non-convex, hence finding a global minimum appears intractable. However, the following theorem demonstrates that as long as a rank-deficient local minimum to the BM factorization is obtained, it corresponds to a global optimum.
**Theorem 3.3.** If \( m \geq d_c + d_r \), all local minima of the BM factorization (14) are global minima.
Furthermore, if \( F \) is twice-differentiable, any rank-deficient local minimum \( \hat{R} := \begin{bmatrix} \hat{U} \\ \hat{V} \end{bmatrix} \) of (14) corresponds to a global minimizer \( Z^* = \hat{U} \hat{V}^\top \) of (16).
This result demonstrates that these two-layer convex MLPs have no spurious local minima under mild conditions. However, there remains an algorithmic challenge: it is not straightforward to obtain a guaranteed local minimum when the gradients of \( f \) are not Lipschitz continuous. The following result provides a general condition under which stationary points of (14) are global optima of (16).
**Theorem 3.4.** For any non-negative objective function \( F \), for a stationary \( (\hat{U}, \hat{V}) \) of (14) with corresponding \( \hat{Z} = \hat{U} \hat{V}^\top \) with objective \( \hat{p} \) for (16), the relative optimality gap \( \frac{\hat{p} - p^*}{p^*} \) satisfies
\[
\frac{\hat{p} - p^*}{p^*} \leq \left( \frac{\|\nabla_Z F(M\hat{Z})\|_D^*}{\beta} - 1 \right)^+
\] (20)
where \( \| \cdot \|_D^* \) is the dual norm of \( \| \cdot \|_D \).
This bound can be calculated by taking the gradient of the unregularized objective function, evaluated at candidate solution \( \hat{Z} \) to the convex problem (16), which is formed by the stationary point of BM problem (14). In the case of a linear MLP with \( X = I_d \), \( F \) a squared-loss objective, and \( \|\nabla_Z F(M\hat{Z})\|_D^* \leq \beta \), our result exactly recovers the result of Theorem 2.1 from Mardani et al. (2013). When this condition is not exactly satisfied, (20) provides a novel result in the form of an optimality gap bound. To our knowledge, this is the first result that generalizes the optimality conditions for stationary points from any BM factorization of a neural network. This provides an easily computable bound after solving (14) which quantifies how close a solution is to the global minimum. In the case of a ReLU MLP, the relative optimality gap is given by
\[
\frac{\hat{p} - p^*}{p^*} \leq \max_{j \in [P], u \in B_j} \frac{1}{\beta} \left( \|\nabla_Z F(\sum_{j'=1}^{P} D_{j'} X \hat{Z}_{j'})u\|_2 - 1 \right)^+
\]
Computing this quantity requires solving a cone-constrained PCA problem (Deshpande et al., 2014). In certain cases, the optimality gap of stationary points (20) is always zero.
**Theorem 3.5.** A stationary point \((\hat{U}, \hat{V})\) of (14) is a global minimizer of (16) if \(R = C = 2\) and
\[
\text{rank}(\hat{U}) = \text{rank}(\hat{V}) = \min\{d_c, d_r\}.
\] (21)
Thus, for linear and gated ReLU MLPs, we can ensure that if the Burer-Monteiro factorization achieves a stationary point with full rank, it corresponds to the global optimum of the convex program. We can now extend these results to CNNs and self-attention architectures.
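For the linear MLP with a squared loss, checking the conditions of Theorems 3.4 and 3.5 at a candidate stationary point takes only a few lines of NumPy; the loss and norm choices here are our assumptions for illustration.

```python
import numpy as np

def optimality_certificate(X, Y, U, V, beta):
    """Return the gap bound of Theorem 3.4 and the full-rank flag of Theorem 3.5
    for Z_hat = U V^T under F(Z) = 0.5 * ||X Z - Y||_F^2."""
    Z_hat = U @ V.T
    grad_F = X.T @ (X @ Z_hat - Y)                          # gradient of F(XZ) at Z_hat
    dual_norm = np.linalg.svd(grad_F, compute_uv=False)[0]  # dual of the nuclear norm
    gap_bound = max(dual_norm / beta - 1.0, 0.0)            # Theorem 3.4 bound
    full_rank = (np.linalg.matrix_rank(U) == min(Z_hat.shape)
                 and np.linalg.matrix_rank(V) == min(Z_hat.shape))  # Theorem 3.5
    return gap_bound, full_rank
```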
### 3.2 CNNs
Before proceeding to explore the BM factorization in the context of two-layer CNNs, we first provide a new result on an equivalent convex program for two-layer ReLU CNNs with arbitrary linear pooling operations, which extends the results of Sahiner et al. (2021b); Ergen & Pilanci (2020) on Global Average Pooling CNNs. Define \(P_a \in \mathbb{R}^{a \times K}\) to be a linear pooling matrix which pools the \(K\) spatial dimensions to an arbitrary size \(a\). Then, we express the non-convex two-layer CNN problem as
\[
p_{CNN}^* := \min_{w_{1j} \in \mathbb{R}^{h}, W_{2j} \in \mathbb{R}^{c \times a}} \sum_{i=1}^{n} F \left( \sum_{j=1}^{m} W_{2j} P_a \sigma(X_i w_{1j}) \right) + \frac{\beta}{2} \sum_{j=1}^{m} \left(\|w_{1j}\|_2^2 + \|W_{2j}\|_F^2\right). \tag{22}
\]
**Theorem 3.6.** For \(\beta > 0\) and ReLU activation \(\sigma(\cdot) = (\cdot)_+\), if \(m \geq m^*\) where \(m^* \leq nac\), then (22) is equivalent to a convex optimization problem, given by
\[
p_{CNN}^* = \min_{Z_k \in \mathbb{R}^{h \times ac}} \sum_{i=1}^{n} F \left( \sum_{k=1}^{P} \begin{bmatrix} \text{trace}(P_a D_k^{(i)} X_i Z_k^{(1)}) \\ \vdots \\ \text{trace}(P_a D_k^{(i)} X_i Z_k^{(c)}) \end{bmatrix} \right) + \beta \sum_{k=1}^{P} \|Z_k\|_{*,K_k}, \tag{23}
\]
where \(K_k := (2D_k - I_{nK})X\) and \(Z_k^{(c')} \in \mathbb{R}^{h \times a}\) for all \(c' \in [c]\).
Thus, we provide a novel result which characterizes two-layer CNNs with arbitrary linear pooling operations as a convex program. Similar results can be shown for the linear and gated-ReLU activation cases.\(^2\) With this established, we present our main results on the BM factorization for CNNs.
**Lemma 3.7.** The BM factorization of the convex CNN problem with ReLU activation is given as:
\[
p_{RCNN}^* = \min_{u_{jk} \in \mathbb{R}^{h}, V_{jk} \in \mathbb{R}^{c \times a}} \sum_{i=1}^{n} F \left( \sum_{k=1}^{P} \sum_{j=1}^{m} V_{jk} P_a D_k^{(i)} X_i u_{jk} \right) + \frac{\beta}{2} \sum_{k=1}^{P} \sum_{j=1}^{m} (\|u_{jk}\|_2^2 + \|V_{jk}\|_F^2), \tag{24}
\]
subject to \((2D_k^{(i)} - I) X_i u_{jk} \geq 0\) for all \(j \in [m]\) and \(k \in [P]\).
The BM factorization closely resembles the original non-convex formulation (22). Generally, (24) inherits the results of Theorems 3.3, 3.4, and 3.5; we present one such corollary here.
**Corollary 3.8.** A stationary point \((\hat{u}_{jk}, \hat{V}_{jk})_{j=1}^{m}_{k=1}^{P}\) of (24) corresponds to a global minimizer \(\hat{Z}_k = \sum_{j=1}^{m} \hat{u}_{jk} \text{vec}(\hat{V}_{jk})^\top\) of (23) provided that
\[
\|\sum_{i=1}^{n} \nabla_{Z_k} F \left( \sum_{k'=1}^{P} \begin{bmatrix} \text{trace}(P_a D_k^{(i)} X_i Z_k^{(1)}) \\ \vdots \\ \text{trace}(P_a D_k^{(i)} X_i Z_k^{(c)}) \end{bmatrix} \right)\|_2 \leq \beta, \quad \forall k \in [P], \forall u \in B_z : (2D_k^{(i)} - I) X_i u \geq 0. \tag{25}
\]
\(^2\)We examine linear and gated ReLU activations for CNNs in Appendix B.2.
### 3.3 Multi-Head Self-Attention
We now for the first time extend BM factorization theory to self-attention networks.
**Lemma 3.9.** The BM factorization of the convex linear-activation\(^3\) self-attention problem is given as:
\[
p^*_{LSA} = \min_{U_j \in \mathbb{R}^{d \times d}, V_j \in \mathbb{R}^{d \times c}} \sum_{i=1}^{n} F \left( \sum_{j=1}^{m} X_i U_j X_i^\top X_i V_j \right) + \frac{\beta}{2} \sum_{j=1}^{m} \left(\|U_j\|_F^2 + \|V_j\|_F^2\right). \tag{26}
\]
In addition to inheriting all of the results of Theorems 3.3, 3.4, and 3.5, noting the equivalence of the BM factorization (26) with the original non-convex program (8), we are the first to show conditions under which there are no spurious local minima for self-attention networks.
**Corollary 3.10.** The linear-activation self-attention network (26) has no spurious local minima as long as the number of heads satisfies \(m \geq m^*\) where \(m^* \leq d^2 + dc\). Furthermore, for any twice-differentiable objective \(F\), if for any local minimum \((\hat{U}_j, \hat{V}_j)_{j=1}^{m}\) of (26), the matrix
\[
\hat{R} := \begin{bmatrix}
\text{vec}(\hat{U}_1) & \cdots & \text{vec}(\hat{U}_m) \\
\text{vec}(\hat{V}_1) & \cdots & \text{vec}(\hat{V}_m)
\end{bmatrix} \in \mathbb{R}^{(d^2+dc) \times m}
\]
is rank-deficient, then this local minimum is also a global minimum of (104).
## 4 Experimental Results

Figure 1 ((a) CIFAR-10; (b) Fashion-MNIST): BM enables layerwise training of convex ReLU CNNs, which are competitive with end-to-end ReLU networks of the same depth. For CIFAR-10, with \(m = 4\) we achieve a test accuracy of 81.5% compared to 81.6% for end-to-end non-convex training, and for Fashion-MNIST, with \(m = 4\) we achieve a test accuracy of 91.6% compared to 91.2% for non-convex training (Kılıçarslan & Çelik, 2021; Bhatnagar et al., 2017).
### 4.1 The Relative Optimality Gap Bound
In this section, we illustrate the utility of our proposed relative optimality bound for stationary points in the setting of two-layer fully-connected networks. We also seek to examine how this bound changes with respect to the number of samples \(n\), the regularization parameter \(\beta\) (which controls the sparsity of the convex solution), and the number of factors \(m\) in the BM factorization. We initialize a class-balanced three-class spiral dataset with a varying number of samples \(n\) (see Figure 3 in Appendix C.1 for examples). For this dataset, we then train the gated ReLU MLP BM factorization (18) with a varying number of factors \(m\). We then compare the stationary points of these BM factorizations found by gradient descent (GD) to the global optimum, which we compute from (5).
For each stationary point of the BM factorization, we compute the relative optimality gap bound provided by our result in Theorem 3.4. We note that since \(d = 2\) and \(c = 3\) in this case, as long as \(m \geq 5\), all local minima of the BM factorization are global minima for every \(j\) (Burer & Monteiro, 2005; Haeffele et al., 2014). While Lee et al. (2016) demonstrated that gradient descent
---
\(^3\)We examine gated ReLU and ReLU activations for self-attention in Appendix B.3
with a random initialization converges to a local minimum almost surely for losses whose gradient is Lipschitz continuous, we use a squared loss with one-hot-encoded class labels, whose gradient is not Lipschitz continuous in \(R\) (Mukkamala & Ochs, 2019). Thus, there is no guarantee that GD will find the global minimum. We display results over $\beta$ in Figure 2. Our bound gives a useful proxy for whether the BM factorization converges to the global minimum. For larger values of $\beta$, it becomes much easier for GD to find a near-optimal solution, but GD almost never finds the exact global minimum.
We find that GD applied to the BM factorization often finds saddle points that are not quite local minima, but are close. Interestingly, there is only a minor relationship between the optimality gap and the rank \(m\) of the BM factorization. This experiment further validates the need to consider stationary points of the BM factorization, rather than just local minima.
### 4.2 BM Enables Layerwise Training of Convex CNNs
We consider the task of leveraging the theory of two-layer convex ReLU neural networks for training deep image classifiers. Following the approach of Belilovsky et al. (2019), we seek to train two-layer convex CNNs greedily to mimic the performance of a deep network. In the non-convex setting, the greedy approach proceeds by training a single two-layer CNN, then freezing the weights of this CNN, using the latent representation of this CNN as the features for another two-layer CNN, and repeating this process for a specified number of stages. We leverage the result of Theorem 3.6 to convert this non-convex layerwise training procedure to a convex one, training stages of convex two-layer gated ReLU CNNs. We apply this procedure to the CIFAR-10 (Krizhevsky et al., 2009) and Fashion-MNIST (Xiao et al., 2017) datasets, using the architecture of Belilovsky et al. (2019) (see Appendix C.2).
In a memory-limited setting, layerwise training with the convex model (23) is impossible, because the latent representation to be used as input for the second stage, given by $\{D_j^{(i)} X_i Z_j^{(c')}\}_{j \in [P],\, c' \in [c]}$, has $Pac$ channels, which for reasonable choices of $P = 256$, $a = 4$, $c = 10$ yields upwards of $10^4$ channels for the input to the second CNN stage. Accordingly, we employ the BM factorization of size $m$, so the latent representation only consists of $mP$ channels.
Figure 1 demonstrates that this BM scheme for layerwise training allows performance to improve from one stage to the next of the layerwise training procedure, reaching the performance of much deeper networks while enabling a convex optimization procedure. Training five stages of a BM-factorized convex two-layer gated ReLU CNN on CIFAR-10 resulted in final test accuracies of 80.9%, 81.1%, and 81.5% for $m \in \{1, 2, 4\}$, respectively. Previously, it has been demonstrated that a six-layer ReLU CNN achieves 81.6% on CIFAR-10 when trained end-to-end (Kılıçarslan & Celik, 2021). Interestingly, we find that increasing $m$ generally improves performance. The three-stage trained BM-factorized convex two-layer gated ReLU CNN on Fashion-MNIST achieved final test accuracies of 91.5%, 91.4%, and 91.6% for $m \in \{1, 2, 4\}$, respectively, compared to 91.2% for a four-layer ReLU CNN trained end-to-end (Bhatnagar et al., 2017). On this dataset, we observe that the impact of increasing $m$ on performance is less pronounced than for CIFAR-10.
The BM factorization is essential for convex neural networks to match deep ReLU networks: without it, the induced regularizer of convex CNNs is intractable to compute, and the latent representation used for layerwise learning is prohibitively large. While inheriting the guarantees of Theorems 3.3, 3.4, and 3.5, these layerwise-trained BM networks match the performance of end-to-end, highly non-convex deep ReLU networks.
REFERENCES
Erling D Andersen and Knud D Andersen. The MOSEK interior point optimizer for linear programming: An implementation of the homogeneous algorithm. In *High performance optimization*, pp. 197–232. Springer, 2000.
Sanjeev Arora, Simon Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In *International Conference on Machine Learning*, pp. 322–332. PMLR, 2019.
Francis Bach, Julien Mairal, and Jean Ponce. Convex sparse matrix factorizations. *arXiv preprint arXiv:0812.1869*, 2008.
Yatong Bai, Tanmay Gautam, and Somayeh Sojoudi. Efficient global optimization of two-layer ReLU networks: Quadratic-time algorithms and adversarial training. *arXiv preprint arXiv:2201.01965*, 2022.
Burak Bartan and Mert Pilanci. Neural spectrahedra and semidefinite lifts: Global convex optimization of polynomial activation neural networks in fully polynomial-time. *arXiv preprint arXiv:2101.02429*, 2021.
Eugene Belilovsky, Michael Eickenberg, and Edouard Oyallon. Greedy layerwise learning can scale to imagenet. In *International conference on machine learning*, pp. 583–593. PMLR, 2019.
Shobhit Bhatnagar, Deepanway Ghosal, and Maheshkumar H Kolekar. Classification of fashion article images using convolutional neural networks. In *2017 Fourth International Conference on Image Information Processing (ICIIP)*, pp. 1–6. IEEE, 2017.
Nicolas Boumal, Vlad Voroninski, and Afonso Bandeira. The non-convex Burer-Monteiro approach works on smooth semidefinite programs. *Advances in Neural Information Processing Systems*, 29, 2016.
Nicolas Boumal, Vladislav Voroninski, and Afonso S Bandeira. Deterministic guarantees for Burer-Monteiro factorizations of smooth semidefinite programs. *Communications on Pure and Applied Mathematics*, 73(3):581–608, 2020.
Samuel Burer and Renato DC Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. *Mathematical Programming*, 95(2):329–357, 2003.
Samuel Burer and Renato DC Monteiro. Local minima and convergence in low-rank semidefinite programming. *Mathematical programming*, 103(3):427–444, 2005.
Ricardo Cabral, Fernando De la Torre, João P Costeira, and Alexandre Bernardino. Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition. In *Proceedings of the IEEE international conference on computer vision*, pp. 2488–2495, 2013.
Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. *Foundations of Computational mathematics*, 9(6):717–772, 2009.
Emmanuel J Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. *IEEE Transactions on Information Theory*, 56(5):2053–2080, 2010.
Diego Cifuentes and Ankur Moitra. Polynomial time guarantees for the Burer-Monteiro method. *arXiv preprint arXiv:1912.01745*, 2019.
Hadi Daneshmand, Jonas Kohler, Aurelien Lucchi, and Thomas Hofmann. Escaping saddles with stochastic gradients. In *International Conference on Machine Learning*, pp. 1155–1164. PMLR, 2018.
Yash Deshpande, Andrea Montanari, and Emile Richard. Cone-constrained principal component analysis. *Advances in Neural Information Processing Systems*, 27, 2014.
Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. *The Journal of Machine Learning Research*, 17(1):2909–2913, 2016.
|
yKC6Jd0CsP
|
Vision ELECTRA: Adversarial Masked Image Modeling with Hierarchical Discriminator
Anonymous authors
Paper under double-blind review
Abstract
As a practical pre-training strategy for natural language processing (NLP), ELECTRA first masks parts of input texts and trains a generator and a discriminator to reconstruct the texts and identify which parts are original or replaced. In this work, we propose Vision ELECTRA, namely \( \mathcal{VE} \), which migrates ELECTRA to the vision domain with a non-trivial extension. Like ELECTRA, \( \mathcal{VE} \) first leverages MAE or SimMIM to reconstruct images from masked image patches. In particular, random Gaussian noise is introduced into the latent space of the generator to enhance the diversity of generated patches, in an adversarial autoencoding manner. Later, given original images and the reconstructed ones, \( \mathcal{VE} \) trains an image encoder (usually ViT or Swin) via a hierarchical discrimination loss, where the discriminator is expected to (1) differentiate between original images and the reconstructed ones and (2) differentiate between original patches and generated ones. This gives \( \mathcal{VE} \) a unique advantage: it learns contextual representations characterizing images at both the macro- and micro-levels (i.e., the entire image and individual patches). Extensive experiments have been carried out to evaluate \( \mathcal{VE} \) against baselines under fair comparisons. The findings demonstrate that \( \mathcal{VE} \) based on ViT-B attains a top-1 accuracy of 83.43% on the ImageNet-1K image classification task, a 1.17% improvement over baselines under continual pre-training. When transferring \( \mathcal{VE} \) pre-trained models to other CV tasks, including segmentation and detection, our method surpasses other methods, demonstrating its applicability to various tasks.
1 Introduction
Self-supervised pre-training strategies have surged in recent years, resulting in powerful pre-trained models, such as BERT (Kenton & Toutanova, 2019), GPT (Radford et al., 2018), and MAE (He et al., 2022), for various tasks. Among these strategies, masked autoencoding has been widely adopted by numerous solutions, including the masked language model (MLM) (Salazar et al., 2020) for natural language processing (NLP) (Strubell et al., 2019) and the masked image model (MIM) (Xie et al., 2022) for computer vision (CV). In general, these strategies first mask part of the input images/texts and then learn to generate the masked parts from the unmasked context, thereby reconstructing the images/texts. To further improve masked autoencoding for NLP, ELECTRA (Clark et al., 2020) follows up the MLM with a discriminator, where the MLM and the discriminator are jointly trained to reconstruct the texts and to identify which parts of the texts are original or replaced. In contrast to the vanilla MLM, ELECTRA outputs the text encoder of the discriminator as the outcome of self-supervised pre-training. Earlier studies (Clark et al., 2020) show that such a text encoder could outperform BERT in learning contextual representations of texts.
Encouraged by the success of ELECTRA, efforts have been made to enhance masked image models for CV. For example, He et al. (2022) proposed the Masked Autoencoder (MAE), which trains vision transformers to reconstruct images from a subset of image patches, where the encoder of the network is adopted as a scalable self-supervised vision learner. Further, to lower the training cost of MIM, Xie et al. (2022) introduce SimMIM, which incorporates random masking of image patches and a raw-pixel regression loss with light-weight prediction heads. More recently, Fei et al. (2023) study incorporating MAE within the training framework of generative adversarial networks (GANs), where a discriminator replaces the pixel-wise regression loss for the image reconstruction task. Though these works have gathered the necessary ingredients, such as masking strategies,
autoencoders, reconstruction losses, and even discriminators that identify original/reconstructed images, they all miss the key concept of ELECTRA for computer vision tasks: leveraging the encoder of the discriminator, rather than that of the autoencoder, as the outcome of pre-training. The non-trivial benefit of ELECTRA's discriminative task for computer vision is that the model learns from all input tokens, rather than just a small masked-out subset, granting an appreciable depth to image comprehension (Clark et al., 2020). It is thus reasonable to adapt ELECTRA to images for potential performance enhancement in self-supervised pre-training.
In this work, we aspire to extend the ELECTRA model to the field of computer vision through substantial enhancements, culminating in the proposal of Vision ELECTRA, referred to as \( \mathcal{VE} \). Mirroring the operational framework of ELECTRA, \( \mathcal{VE} \) first employs either MAE or SimMIM to reconstruct images from masked image patches. Specifically, random Gaussian noise is injected into the latent space of the generator to diversify the generated patches while adhering to the principles of adversarial autoencoding. Subsequently, \( \mathcal{VE} \) trains an image encoder, typically ViT or Swin, as the image discriminator using a hierarchical discrimination loss. Within the joint training procedure of the generator and discriminator, the discriminator has two key responsibilities: first, it distinguishes between original images and their reconstructed counterparts; second, it discerns original patches from generated ones. The hierarchical discrimination loss accords \( \mathcal{VE} \) a distinct advantage by imparting the ability to learn contextual representations that characterize images at both the macro- and micro-levels. In other words, it can understand both the overall image and its individual patches, granting an appreciable depth to image comprehension.
The main contributions are as follows: (1) We propose a novel MIM framework, \( \mathcal{VE} \), following the core concept of ELECTRA: adopting the generator-discriminator paradigm for CV and leveraging the encoder of the discriminator as the pre-training outcome. (2) The proposed \( \mathcal{VE} \) incorporates three innovative designs: i) adversarial pre-training of the generator to enhance "image authenticity", ii) incorporation of Gaussian noise to perturb the latent space and thus diversify the reconstructed images, and iii) introduction of the hierarchical discriminator to capture contextual representations of images at both the macro- and micro-levels. (3) We perform extensive experiments demonstrating consistent performance improvements over mainstream MIM methods on downstream tasks.
2 RELATED WORK
Mask Image Modeling Self-supervised learning, widely used in NLP tasks (Brown et al., 2020; Kenton & Toutanova, 2019), has found success in vision with pixel-sequence prediction (iGPT) (Chen et al., 2020) and masked patch prediction for self-supervised pre-training (ViT) (Dosovitskiy et al., 2020). Following these advancements, Transformer-based architectures have emerged in Masked Image Modeling (MIM) (Bao et al., 2021; Feichtenhofer et al., 2022; He et al., 2022; Xie et al., 2022; Wang et al., 2022; Wei et al., 2022). MIM models predict masked content from visible regions, enriching visual interpretations. BEiT (Bao et al., 2021) has enhanced this area by learning via discrete token prediction. MAE (He et al., 2022) and SimMIM (Xie et al., 2022), meanwhile, favor pixel-wise masking and reconstruction, eliminating the need for discrete token representations.
Generative Adversarial Networks GANs (Goodfellow et al., 2014) have proven effective at generating high-quality artificial data. The practice of using the GAN discriminator in subsequent tasks is akin to our approach and was introduced by Radford et al. (2015). Similarly, MaskGAN (Fedus et al., 2018) trains its generator to fill in removed tokens, a concept paralleling MIM. More recently, Fei et al. (2023) proposed incorporating GANs into the MAE framework to replace the pixel-wise regression loss for enhanced image reconstruction.
ELECTRA Proposed by Clark et al. (2020), ELECTRA is a two-part model with a generator and a discriminator, both based on BERT (Kenton & Toutanova, 2019). The generator uses MLM to produce replacements for [MASK] tokens, and the discriminator detects these replacements in the text. After pre-training, the generator is discarded, making the discriminator a pre-trained language model. ELECTRA either outperforms BERT under the same computing budget or performs comparably.
Discussion Our work presents several unique contributions compared to previous studies. In terms of novelty, this work is the first to migrate ELECTRA (Clark et al., 2020) into CV. Non-trivial extensions, such as adversarial pre-training of generator with Gaussian noises in the latent space, alongside a hierarchical discrimination loss for representation learning at both macro- and micro-levels,
allow \( \mathcal{VE} \) to outstrip a straightforward re-implementation of ELECTRA for CV. Unlike Fei et al. (2023), who also train an MAE generator with a discriminator via a GAN-like loss, \( \mathcal{VE} \) employs the encoder of the discriminator, rather than the MAE, as the outcome of pre-training, following the core design of ELECTRA: the discriminative task allows the model to learn from all input tokens rather than just a small masked-out subset, granting an appreciable depth to image comprehension. Further, \( \mathcal{VE} \) improves upon a plain GAN loss with hierarchical discrimination, enhancing the generator's and discriminator's ability to reconstruct and identify patches/images. All these designs make \( \mathcal{VE} \) a novel and effective framework for pre-training on top of MIM.
3 METHODOLOGY
In this section, we first present the overall design of \( \mathcal{VE} \), then introduce the key algorithm designs for training the generator and discriminator within \( \mathcal{VE} \).
3.1 OVERALL DESIGN OF VISION ELECTRA
As shown in Figure 1, \( \mathcal{VE} \) is built upon a generator-discriminator paradigm, where the image encoder of the discriminator is the outcome of pre-training. In our study, the generator \( \mathcal{G} \) is a masked image modeling (MIM) model (e.g., MAE (He et al., 2022) or SimMIM (Xie et al., 2022)), which masks a portion of the input image and predicts the masked patches. Meanwhile, the discriminator \( \mathcal{D} \) is a vision transformer (e.g., ViT (Dosovitskiy et al., 2020) or Swin Transformer (Liu et al., 2021)), which classifies whether each image patch, or the entire image, is original or reconstructed.
For \( \mathcal{G} \), it takes the visible image patches as input, obtains the latent codes, and then reconstructs the masked patches from the latent codes with a small network. Different from MAE, \( \mathcal{G} \) introduces a Gaussian noise vector to perturb the latent space, mildly inhibiting its capability while enhancing feature diversity (Tian et al., 2020), thus strengthening \( \mathcal{D} \) in an adversarial autoencoding manner.
For \( \mathcal{D} \), it serves a dual role: distinguishing between original and reconstructed patches/images. These tasks share the backbone weights, with different task-specific heads. In alignment with established strategies (He et al., 2022; Xie et al., 2022), \( \mathcal{D} \) processes the patch tokens through a sequence of Transformer blocks. Patch discrimination is performed by a CNN-based head that discerns original tokens from those replaced by \( \mathcal{G} \). For image discrimination, a linear projection layer is used as the image head to determine whether the image is original or reconstructed.
Given an image \( I \), we first split it into non-overlapping patches and add a CLS token \( x_0 \), i.e., \( I = \{x_0, x_1, \cdots, x_N\} \); we then randomly mask a subset of patches according to a mask ratio (e.g., 0.75 in MAE). After that, we use \( \mathcal{G} \) to reconstruct the masked patches, and finally we feed the reconstructed and the original images to \( \mathcal{D} \) for discrimination (see Figure 1).
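For concreteness, a hedged sketch of one pre-training forward pass is given below. The generator/discriminator interfaces and the `patchify`/`unpatchify` helpers are assumptions rather than the exact implementation.

```python
import torch

def patchify(images, p=16):
    # (B, C, H, W) -> (B, N, p*p*C); standard non-overlapping patch split
    B, C, H, W = images.shape
    x = images.reshape(B, C, H // p, p, W // p, p)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(B, (H // p) * (W // p), p * p * C)

def unpatchify(patches, p=16, channels=3):
    # inverse of patchify, assuming a square grid of patches
    B, N, _ = patches.shape
    h = w = int(N ** 0.5)
    x = patches.reshape(B, h, w, p, p, channels)
    return x.permute(0, 5, 1, 3, 2, 4).reshape(B, channels, h * p, w * p)

def ve_pretraining_step(images, generator, discriminator, mask_ratio=0.75):
    """One VE forward pass: mask -> reconstruct -> discriminate (interfaces are assumptions).
    `generator(images, mask_ratio)` is assumed to return per-patch reconstructions and a
    binary mask (1 = masked); `discriminator(x)` is assumed to return per-patch logits from
    the patch head and a single CLS logit from the image head."""
    recon_patches, mask = generator(images, mask_ratio=mask_ratio)
    patches = patchify(images)
    m = mask.unsqueeze(-1).float()
    # compose the reconstructed image: generated patches at masked positions, originals elsewhere
    fake_images = unpatchify(m * recon_patches + (1.0 - m) * patches)
    real_patch_logits, real_cls_logit = discriminator(images)
    fake_patch_logits, fake_cls_logit = discriminator(fake_images)
    return (recon_patches, mask, patches,
            real_patch_logits, real_cls_logit,
            fake_patch_logits, fake_cls_logit)
```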
3.2 GENERATOR VIA ADVERSARIAL MASKED IMAGE MODELING
\( \mathcal{G} \) reconstructs images from masked image patches using a MIM framework.
Patch Generation by Pixel-wise Regression The output of \( \mathcal{G} \) is reshaped to form the reconstructed image. Similar to the previous MIM framework (He et al., 2022; Xie et al., 2022), the loss function used in our approach computes the MSE between the reconstructed and original images in the pixel space. To calculate the loss, we only consider the masked patches, following a similar
Figure 2: An Illustration of Hierarchical Discrimination. At the micro-level, adversarial training progressively makes the reconstructed patches indistinguishable to the patch head. At the macro-level, the image head introduces another discriminative constraint for the pipeline.
approach as MAE. Our reconstruction loss \(L_{\text{Img}}\) is defined as follows.
$$L_{\text{Img}}(\theta_G) = \frac{1}{M} \sum_{i=1}^{M} (x_i^m - x_i^r)^2,$$
where $x^r$ represents the patches recovered by $\mathcal{G}$'s decoder from the latent code $z$, i.e., $x^r = f(z + \tilde{z})$, with $\tilde{z}$ a random Gaussian noise vector; $x^m$ represents the masked patches of the original image $x$; and $M$ is the number of masked patches.
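A minimal PyTorch-style sketch of this masked-patch reconstruction loss, assuming a binary mask with 1 marking masked patches, is:

```python
import torch

def reconstruction_loss(recon_patches, target_patches, mask):
    """MSE computed over masked patches only.
    recon_patches, target_patches: (B, N, patch_dim); mask: (B, N), 1 = masked."""
    per_patch = ((recon_patches - target_patches) ** 2).mean(dim=-1)
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)
```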
Image Reconstruction in Adversarial Manner If the generator relies solely on the patch reconstruction loss, it is easy for the discriminator to distinguish the reconstructed image from the original one, since the reconstructed patches are typically blurry. Such a simple task therefore does not help the discriminator capture useful information for downstream tasks. To address this issue, we introduce an adversarial loss (Goodfellow et al., 2014) to enhance the authenticity of the generated images. The reconstructed images should fool the discriminator at two levels, the macro level and the micro level; i.e., the discriminator should treat the entire reconstructed image and each reconstructed patch as real, resulting in the following adversarial loss:
$$L_{\text{GAN}}(\theta_G) = -\frac{1}{M} \sum_{i=1}^{M} \log(D(x_i^r)) - \log(D(x_0^r))$$
where $x_i^r$ denotes the $i$th reconstructed patch and $x_0^r$ denotes the CLS token, whose representation summarizes the entire reconstructed image; the full input image is composed of the reconstructed masked patches and the original unmasked patches. $D(x_i^r)$ denotes the discriminator's prediction for the $i$th patch and, likewise, $D(x_0^r)$ is the prediction for the entire reconstructed image. $M$ denotes the number of reconstructed patches in an image.
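Assuming the discriminator heads output raw logits (so that $D(\cdot)$ corresponds to a sigmoid of the logit), the generator-side adversarial loss can be sketched as:

```python
import torch
import torch.nn.functional as F

def generator_gan_loss(fake_patch_logits, fake_cls_logit, mask):
    """Adversarial loss for the generator: reconstructed patches and the reconstructed
    image should both be classified as 'original' (label 1) by the discriminator."""
    patch_term = F.binary_cross_entropy_with_logits(
        fake_patch_logits, torch.ones_like(fake_patch_logits), reduction="none")
    patch_term = (patch_term * mask).sum() / mask.sum().clamp(min=1)  # reconstructed patches only
    cls_term = F.binary_cross_entropy_with_logits(
        fake_cls_logit, torch.ones_like(fake_cls_logit))
    return patch_term + cls_term
```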
3.3 DISCRIMINATOR VIA HIERARCHICAL DISCRIMINATION LOSS
$D$ is responsible for distinguishing between original and reconstructed inputs. It achieves this task by employing a weight-shared backbone and two sub-task heads.
Micro-Level Discrimination We consider the patch head as the micro-level discrimination, which classifies whether a patch is original or reconstructed. Therefore, we use a binary cross-entropy loss, i.e.,
$$L_{\text{Patch}}(\theta_D) = -\frac{1}{N} \sum_{i=1}^{N} [y_i \log(D(x_i)) + (1 - y_i) \log(1 - D(x_i))]$$
where $x_i$ is an image patch and $y_i$ is the corresponding label. For the original patch in an image, $y_i = 1$, and $y_i = 0$ for a reconstructed patch. $N$ represents the number of patches in image $I$. Note that $I$ can be the reconstructed or original image.
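A sketch of this micro-level loss, assuming per-patch logits from the patch head and per-patch labels as defined above:

```python
import torch
import torch.nn.functional as F

def patch_discrimination_loss(patch_logits, patch_labels):
    """Micro-level loss: binary cross-entropy averaged over all N patches of an image,
    with label 1 for original patches and 0 for reconstructed ones."""
    return F.binary_cross_entropy_with_logits(patch_logits, patch_labels.float())
```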
Macro-Level Discrimination The macro-level discrimination is to classify whether the entire image is original or not. Similar to micro-level discrimination, we also use the binary cross-entropy loss. The difference is that we use the representation of CLS token to compute the loss function, i.e.,
$$L_{\text{CLS}}(\theta_D) = -y \log(D(x_0)) - (1 - y) \log(1 - D(x_0))$$
where $y = 0$ for a reconstructed image and $y = 1$ for the original image.
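Similarly, a sketch of the macro-level loss on the CLS-token logit from the image head:

```python
import torch
import torch.nn.functional as F

def image_discrimination_loss(cls_logit, is_original):
    """Macro-level loss: binary cross-entropy on the CLS-token logit,
    with label 1 for an original image and 0 for a reconstructed one."""
    target = torch.full_like(cls_logit, 1.0 if is_original else 0.0)
    return F.binary_cross_entropy_with_logits(cls_logit, target)
```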
Figure 3: Illustration of Reconstructed Images with Two Pre-training Schemes.
| Method | Pre-training strategies | Input size | Mask ratio | Top-1 acc (%) |
|--------|-------------------------|------------|------------|---------------|
| SimMIM | 100ep from Scratch | $224^2$ | 60% | 81.39 |
| MAE | 100ep from Scratch | $224^2$ | 75% | 81.68 |
| $\mathcal{VE}$ | 100ep from Scratch | $224^2$ | 75% | 80.73 |
| SimMIM | Official | $224^2$ | 60% | 82.16 |
| MAE | Official | $224^2$ | 75% | 82.89 |
| SimMIM | Official+50ep | $224^2$ | 60% | 82.26 |
| MAE | Official+50ep | $224^2$ | 75% | 83.06 |
| $\mathcal{VE}$ | Official+50ep | $224^2$ | 75% | **83.43** |
Table 1: Performance Comparisons: '100ep' means pre-training from scratch for 100 epochs, 'Official' refers to fine-tuning based on the officially released pre-trained models, and '+50ep' refers to an additional 50 epochs of pre-training based on the official releases (He et al., 2022; Xie et al., 2022).
3.4 JOINT PRE-TRAINING OF GENERATOR AND DISCRIMINATOR
We use the following loss to jointly train the generator and discriminator,
$$\mathcal{L} = \min_{\theta_G, \theta_D} \mathcal{L}_{Img}(\theta_G) + \lambda \mathcal{L}_{GAN}(\theta_G) + \mathcal{L}_{Patch}(\theta_D) + \mathcal{L}_{CLS}(\theta_D)$$
(5)
To stabilize the training process, we use a small $\lambda$ in our experiments, i.e, $\lambda = 0.2$.
Note that all tokens are visible to the discriminator, which matches the setting of downstream tasks and narrows the gap between pre-training and fine-tuning.
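Putting the pieces together, one possible alternating-update implementation of the joint objective (5) with $\lambda = 0.2$ is sketched below, reusing the helper sketches above. The single-minimization form in (5) leaves the update schedule unspecified, so alternating generator/discriminator steps are an assumption.

```python
import torch

def joint_training_step(images, generator, discriminator, opt_g, opt_d, lam=0.2):
    """One alternating update of Eq. (5): the generator minimizes L_Img + lam * L_GAN,
    the discriminator minimizes L_Patch + L_CLS on original and reconstructed images."""
    (recon, mask, patches,
     _, _, fake_patch_logits, fake_cls_logit) = ve_pretraining_step(images, generator, discriminator)

    # --- generator update ---
    loss_g = reconstruction_loss(recon, patches, mask) \
             + lam * generator_gan_loss(fake_patch_logits, fake_cls_logit, mask)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # --- discriminator update (generator outputs detached) ---
    m = mask.unsqueeze(-1).float()
    fake_images = unpatchify(m * recon.detach() + (1.0 - m) * patches)
    real_patch_logits, real_cls_logit = discriminator(images)
    fake_patch_logits, fake_cls_logit = discriminator(fake_images)
    loss_d = (patch_discrimination_loss(real_patch_logits, torch.ones_like(real_patch_logits))
              + patch_discrimination_loss(fake_patch_logits, 1.0 - mask)  # visible patches stay "original"
              + image_discrimination_loss(real_cls_logit, True)
              + image_discrimination_loss(fake_cls_logit, False))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_g.item(), loss_d.item()
```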
4 EXPERIMENT
We employ MAE (He et al., 2022) as the generator \( \mathcal{G} \) within \( \mathcal{VE} \). For the discriminator \( \mathcal{D} \) of \( \mathcal{VE} \), we utilize ViT-B/16 (Dosovitskiy et al., 2020). Our experimental configuration entails self-supervised pre-training on ImageNet-1K (Deng et al., 2009). Subsequently, we fine-tune on downstream tasks such as classification (Lu & Weng, 2007), segmentation (Guo et al., 2018), and detection (Zou et al., 2023). More experimental settings are provided in the supplementary material.
4.1 COMPARISONS WITH PREVIOUS RESULTS ON IMAGE CLASSIFICATION
For fair comparison, following previous works (He et al., 2022; Xie et al., 2022), we conduct experiments using ViT-B. The quantitative results are shown in Table 1, and the qualitative results are exhibited in Figure 3. Specifically, two sets of comparisons are as follows.
Pre-training from Scratch We train \( \mathcal{VE} \) for 100 epochs from scratch (random initialization). For SimMIM (Xie et al., 2022) and MAE (He et al., 2022), we also perform the same 100 epochs of pre-training
Figure 4: Examples of Reconstructed Images. (a) Sharing Weight: reconstructed images at varying training steps, (b) Smaller Generator: reconstructed images with a smaller generator.
| Settings | Pre-training strategies | Input size | Top-1 acc (%) |
|---------------------------|-------------------------|------------|---------------|
| $\mathcal{VE}$ (Sharing Weight) | Official+50ep | $224^2$ | 82.37 |
| $\mathcal{VE}$ (Smaller Generator) | Official+50ep | $224^2$ | 81.93 |
| $\mathcal{VE}$ | Official+50ep | $224^2$ | **83.43** |
Table 2: Experiment Results on Model Exploration.
from scratch with the official configuration. We then fine-tune these models for an additional 100 epochs and compare their performance. As shown in Table 1, our method obtains competitive results compared to the mainstream MIM methods (\( \mathcal{VE} \): 80.73 vs. SimMIM: 81.39 vs. MAE: 81.68).
We believe \( \mathcal{VE} \) performs marginally worse than the baselines for two reasons. (1) \( \mathcal{VE} \) leverages an adversarial loss derived from GANs to ensure the quality of image reconstruction, while GAN training is sometimes difficult to converge and frequently leads to mode collapse (Salimans et al., 2016). (2) The discriminator in \( \mathcal{VE} \) cannot learn good representations by distinguishing between original images and low-quality generated ones (see the examples in Fig. 3), while high-quality image generation usually requires more training effort (especially compared to the cost of language generation in vanilla ELECTRA). Note that we use 100 training epochs here to follow the settings of vanilla ELECTRA (Clark et al., 2020).
Continual Pre-training In addition to training from scratch, we take pre-trained models from their official releases and conduct experiments by continually pre-training them. Specifically, we first build a \( \mathcal{VE} \) model using ViT-B (as the discriminator \( \mathcal{D} \)) pre-trained by SimMIM, and continue to train \( \mathcal{VE} \) for an additional 50 epochs. We then compare the discriminator of this \( \mathcal{VE} \) model with SimMIM and MAE under the same continual pre-training settings, where both SimMIM and MAE were first loaded from their official releases and further trained for an additional 50 epochs. As shown in Table 1, while both SimMIM and MAE can be improved by continual pre-training, our proposed method still outperforms these models and achieves a Top-1 acc of 83.43%. Compared to MAE (Official+50ep), \( \mathcal{VE} \) exhibits an improvement of 0.37 points. Compared to SimMIM (Official+50ep), \( \mathcal{VE} \) demonstrates superior performance, surpassing it by 1.17 points. Note that, in this setting, the discriminator of \( \mathcal{VE} \) was derived from ViT-B pre-trained by SimMIM; our method demonstrates absolute performance improvements with 50 additional epochs of continual pre-training, surpassing both SimMIM (Official) and SimMIM (Official+50ep). Furthermore, as illustrated in Figure 3, it is evident that more authentic reconstructed images clearly enhance the hierarchical discrimination at both the macro- and micro-levels.
4.2 Empirical Studies on Model Exploration
Inspired by the Model Extensions section of ELECTRA (Section 3.2 of Clark et al., 2020), we discuss several design options of \( \mathcal{VE} \). In particular, we focus on two settings: (a) Sharing Weight and (b) Smaller Generator.
Sharing Weight As mentioned above, both the generator \( \mathcal{G} \) and the discriminator \( \mathcal{D} \) in \( \mathcal{VE} \) use the same model, i.e., ViT-B, as their encoders. It is reasonable to assume that sharing weights between these two encoders might improve performance under the same training budget. As shown in Table 2, the performance of \( \mathcal{VE} \) under the sharing-weight setting lags behind the vanilla \( \mathcal{VE} \) (82.37 vs. 83.43). We attribute the degradation to the disparity between the task domains of the generator and the discriminator: sharing weights between the two leads to instability during
pre-training. Figure 4(a) shows that, with increasing training steps, the generator under the sharing-weight setting produces images of varying quality: some are of high quality, while others are notably inferior. This variability underscores our assumption that weight sharing leads to instability during pre-training. Hence, we refrain from the sharing-weight strategy in $\mathcal{VE}$.
**Smaller Generator** As the pre-training outcome of $\mathcal{VE}$ is the encoder of the discriminator, it is reasonable to ask whether a smaller generator (with fewer parameters) could still achieve good performance. To be specific, compared to the ViT-B used by the vanilla $\mathcal{VE}$, we employ ViT-S as the encoder of the generator to build $\mathcal{VE}$ (Smaller Generator). Specifically, we first pre-train ViT-S within MAE for 200 epochs and use this ViT-S model as the encoder of the generator in $\mathcal{VE}$ (Smaller Generator). We still adopt ViT-B pre-trained by SimMIM as the encoder of the discriminator and perform an additional 50 epochs of pre-training. As shown in Table 2, $\mathcal{VE}$ (Smaller Generator) performs worse than the vanilla $\mathcal{VE}$ in terms of Top-1 acc (81.93 vs. 83.43). The performance degradation is due to the low quality of the images generated by the smaller generator (shown in Figure 4(b)) when employing ViT-S as the encoder of the generator.
### 4.3 Ablation Study
The ablation study is conducted to validate the effectiveness of the individual components of our $\mathcal{VE}$. All experiments in this section are pre-trained on the ImageNet-1K dataset (Deng et al., 2009) and subsequently fine-tuned on the ImageNet-1K image classification task.
**Patch Loss** As presented in Table 3, excluding the patch loss reduces the Top-1 acc of $\mathcal{VE}$, fine-tuned for the ImageNet-1K image classification task, marginally from 83.43 to 83.26. This degradation is probably due to ignoring patch-level discrimination: without the patch loss, the discriminator only appraises the overall image authenticity (original or reconstructed). As a result, $\mathcal{VE}$ without the patch loss fails to train the encoder of the discriminator with the ability to represent images at the micro-level.
**GAN Loss** As shown in Table 3, excluding the GAN loss slightly decreases the Top-1 acc of $\mathcal{VE}$, fine-tuned for the ImageNet-1K image classification task, from 83.43 to 83.24. Additionally, the qualitative results in Figure 5 show the images reconstructed by $\mathcal{VE}$ with and without the GAN loss. It is evident that the GAN loss helps $\mathcal{VE}$ generate images with higher authenticity. As a result, $\mathcal{VE}$ with the GAN loss trains the encoder of the discriminator with a better capacity for feature learning at the macro-level.
Together, the ablation studies on the patch and GAN losses demonstrate the effectiveness of our proposed hierarchical discrimination for both image generation and contextual representation at the micro- and macro-levels. The joint adversarial training of the generator and discriminator benefits from both losses.
**CLS Loss** As depicted in Table 3, it is evident that the inclusion of the CLS loss yields a substantial performance enhancement, notably increasing the Top-1 acc from 83.01 to 83.43.
Figure 6: Visualization of Self-attention from Pre-trained Models. We have visualized self-attention maps on the two heads separately. The figures of self-attention head-9 demonstrate micro-level representation ability, while self-attention head-6 exhibits macro-level representation ability.
| Methods | Input size | Pre-training strategies | Patch loss | GAN loss | CLS loss | Gaussian noise | Top-1 acc (%) |
|---------|------------|------------------------|------------|----------|----------|----------------|---------------|
| SimMIM | $224^2$ | Official+50ep | | | | | 82.26 |
| $\mathcal{VE}$ | $224^2$ | Official+50ep | | ✓ | ✓ | ✓ | 83.26 |
| $\mathcal{VE}$ | $224^2$ | Official+50ep | ✓ | | ✓ | ✓ | 83.24 |
| $\mathcal{VE}$ | $224^2$ | Official+50ep | ✓ | ✓ | | ✓ | 83.01 |
| $\mathcal{VE}$ | $224^2$ | Official+50ep | ✓ | ✓ | ✓ | | 83.33 |
| $\mathcal{VE}$ | $224^2$ | Official+50ep | ✓ | ✓ | ✓ | ✓ | **83.43** |
Table 3: Ablation Study. For a fair comparison, we use ViT-B pre-trained with the above methods under the same setting and fine-tuned on the downstream task.
This improvement can be attributed to the integration of the CLS token, which serves as an image head for pre-training the discrimination between original and reconstructed images; this proves highly beneficial for subsequent fine-tuning on image classification. The positive impact of the CLS token is also illustrated in the qualitative results presented in Figure 6. Additionally, drawing inspiration from DINO (Caron et al., 2021), we visualize the attention heat maps of the final layer of ViT-B. Figure 6 clearly demonstrates that, after incorporating the CLS loss and the CLS token, $\mathcal{VE}$ exhibits a strengthened focus on the primary subject within the image.
Gaussian Noise In Figure 7, we highlight the differences among several kinds of images: the original image and the images generated by $\mathcal{VE}$ with and without Gaussian noise. Using Gaussian noise generates images that are more diverse yet still realistic. To some extent, adding Gaussian noise to the latent code improves the diversity of the generated images, which acts as data augmentation for the discriminator and hence benefits pre-training. In the quantitative results in Table 3, the Top-1 accuracy increases from 83.33 to 83.43 with the additional Gaussian noise.
Discriminative Pre-training Strategy Figure 6 reveals that, in contrast to SimMIM, which attends mostly to the entire image rather than to objects (i.e., self-attention heads head-6 and head-9 exhibit similar macro-level attention), $\mathcal{VE}$ demonstrates both macro- and micro-level attention: head-9 focuses on objects, while head-6 focuses on the whole image. This qualitative observation demonstrates that, unlike prevalent MIM methods such as MAE and SimMIM, the pre-trained discriminator of $\mathcal{VE}$ has a smaller task-domain gap between the pre-training task and the downstream tasks (Xie et al., 2023), facilitating more effective fine-tuning and improving performance (quantitative results shown in Table 1).
4.4 Cross-Task Transfer Learning Experiments
Semantic segmentation To maintain a fair comparison, we fine-tune the pre-trained models provided by MAE and SimMIM under the same configuration. Table 4(a) shows that our framework significantly improves semantic segmentation performance compared to the mainstream methods, e.g., by 2.06 points over SimMIM and 1.02 points over MAE. The reasons are two-fold. First, the
Figure 7: Examples of Recovered Images with and w/o Gaussian Noise within the $\mathcal{V}_E$.
| Method | Pre-training strategies | Input size | mIoU |
|--------|-------------------------|------------|------|
| SimMIM | Official+50ep | $512^2$ | 46.76|
| MAE | Official+50ep | $512^2$ | 47.80|
| $\mathcal{V}_E$ | Official+50ep | $512^2$ | **48.82** |
(a) ADE20K Semantic Segmentation
| Method | Pre-training strategies | Input size | AP$_{box}$ |
|--------|-------------------------|------------|------------|
| SimMIM | Official+50ep | $768^2$ | 45.95 |
| MAE | Official+50ep | $768^2$ | 46.10 |
| $\mathcal{V}_E$ | Official+50ep | $768^2$ | **46.60** |
(b) COCO Object Detection
Table 4: Cross-Task Transfer Learning Experiments: (a) ADE20K semantic segmentation using UperNet. The reproduction code is from mae-segmentation (Li, 2022). (b) COCO object detection using a ViT Mask R-CNN baseline. The reproduction code is from MIMDet (Fang et al., 2022).
discriminator treats all patches as visible in the pre-training phase, which is the same as in the fine-tuning phase. Second, during pre-training we use a patch-level classification task which is, to some extent, similar to pixel-level classification, benefiting the task of semantic segmentation.
Object detection We also fine-tune the pre-trained models provided by MAE and SimMIM under the same configuration for the task of object detection. As shown in Table 4(b), compared to the mainstream methods, our $\mathcal{VE}$ performs better under the same configuration. The AP$_{box}$ obtained by our $\mathcal{VE}$ is 0.5 points higher than MAE (46.60 vs. 46.10) and 0.65 points higher than SimMIM (46.60 vs. 45.95). The reason for the improvement is that our pre-trained model localizes objects better, i.e., $\mathcal{VE}$ attends to both the primary object and the entire scene, while the ViT pre-trained by SimMIM shows only global attention (see Fig. 6).
4.5 Remarks on Experiment Results
Here, we summarize the above experimental results and draw three conclusions. (1) $\mathcal{VE}$ is effective for representation learning and outperforms SimMIM and MAE in continual pre-training settings with the same number of training epochs (i.e., Official+50ep, introduced in Section 4.1). (2) The joint training procedure employed by $\mathcal{VE}$ trains the generator and discriminator simultaneously and adversarially: the higher the quality of the images reconstructed by the generator, the more effectively the encoder in the discriminator is trained. (3) $\mathcal{VE}$ works well on various CV tasks, including classification, segmentation, and detection, and every component proposed in $\mathcal{VE}$ has a specific role contributing to the overall functionality.
5 Conclusion
In this work, we propose a novel MIM framework, $\mathcal{VE}$, following the core concept of ELECTRA: adopting the generator-discriminator paradigm for CV and leveraging the encoder of the discriminator as the pre-training outcome. To achieve this goal, several non-trivial technical contributions have been made, including adversarial pre-training of the generator to enhance image authenticity, infusion of Gaussian noise into the latent space for image diversity, and a hierarchical discrimination loss that enables representation at both the macro- and micro-levels. Extensive experiments demonstrate the performance advancements of $\mathcal{VE}$ on downstream tasks; $\mathcal{VE}$ outperforms mainstream masked image modeling (MIM) methods, including SimMIM and MAE, under fair comparisons. Empirical and ablation studies explore the design choices of our framework and confirm the soundness of every proposed component. Cross-task transfer learning experiments further confirm the applicability of $\mathcal{VE}$ to various CV applications, including segmentation and detection. As a first-of-its-kind initiative to leverage ELECTRA in CV, this work pioneers a unique blend of NLP techniques and CV models. Future work will expand on practical applications of this method and further improve pre-training efficiency.
REFERENCES
Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. *arXiv preprint arXiv:2106.08254*, 2021.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. *Advances in neural information processing systems*, 33:1877–1901, 2020.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In *Proceedings of the IEEE/CVF international conference on computer vision*, pp. 9650–9660, 2021.
Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In *International conference on machine learning*, pp. 1691–1703. PMLR, 2020.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In *International Conference on Learning Representations*, 2020. URL https://openreview.net/forum?id=rlxMHlBtvB.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. IEEE, 2009.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. *arXiv preprint arXiv:2010.11929*, 2020.
Yuxin Fang, Shusheng Yang, Shijie Wang, Yixiao Ge, Ying Shan, and Xinggang Wang. Unleashing vanilla vision transformer with masked image modeling for object detection. *arXiv preprint arXiv:2204.02964*, 2022.
William Fedus, Ian Goodfellow, and Andrew M Dai. MaskGAN: Better text generation via filling in the _. *arXiv preprint arXiv:1801.07736*, 2018.
Zhengcong Fei, Mingyuan Fan, Li Zhu, Junshi Huang, Xiaoming Wei, and Xiaolin Wei. Masked auto-encoders meet generative adversarial networks and beyond. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 24449–24459, 2023.
Christoph Feichtenhofer, Yanghao Li, Kaiming He, et al. Masked autoencoders as spatiotemporal learners. *Advances in neural information processing systems*, 35:35946–35958, 2022.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. *Advances in neural information processing systems*, 27, 2014.
Yanming Guo, Yu Liu, Theodoros Georgiou, and Michael S Lew. A review of semantic segmentation using deep neural networks. *International journal of multimedia information retrieval*, 7:87–93, 2018.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 16000–16009, 2022.
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *Proceedings of NAACL-HLT*, pp. 4171–4186, 2019.
Xiang Li. Ade20k semantic segmentation with mae. https://github.com/implus/mae_segmentation, 2022.
|
YEhQs8POIo
| "Generating differentially private (DP) synthetic data that closely resembles the original private d(...TRUNCATED)
|
mZn2Xyh9Ec
| "FLASHATTENTION-2: FASTER ATTENTION WITH BETTER PARALLELISM AND WORK PARTITIONING\n\nTri Dao\\textsu(...TRUNCATED)
|
Wxyyc2vvGd
| "CL-CALIB: Enhancing Post-training Quantization Calibration through Contrastive Learning\n\nAnonymou(...TRUNCATED)
|
ZGBOfAQrMl
| "VIDEO SUPER-RESOLUTION TRANSFORMER WITH MASKED INTER&INTRA-FRAME ATTENTION\n\nAnonymous authors\nPa(...TRUNCATED)
|
SQGUDc9tC8
| "THE DEVIL IS IN THE NEURONS: INTERPRETING AND MITIGATING SOCIAL BIASES IN PRE-TRAINED LANGUAGE MODE(...TRUNCATED)
|
otoggKnn0A
| "FHA-Kitchens: A Novel Dataset for Fine-Grained Hand Action Recognition in Kitchen Scenes\n\nAnonymo(...TRUNCATED)
|