Balanced Continual Learning in Plasticity and Stability for Recommender System

OOO, OOO, OOO

ABSTRACT
Continual learning in recommender systems predominantly aims to enhance stability to prevent catastrophic forgetting, a goal similarly pursued in computer vision and natural language processing. However, approaches that solely emphasize stability fail to account for the dynamic nature of real-world environments, where user preferences frequently change. It is therefore imperative for continual learning-based recommender systems to capture changes in user preferences and to reflect new preferences flexibly, an ability called plasticity. In this paper, we present the BPS (Balanced Continual Learning in Plasticity and Stability for Recommender System) framework, a novel method that effectively balances plasticity and stability in recommender systems. BPS captures individual user preference changes throughout the continual learning process and then adjusts the replay sample size and regularization intensity to accommodate these changes, ensuring a personalized and adaptive learning process. Our experimental results confirm the superior performance of BPS over other baseline methods across three public datasets. Additionally, we demonstrate that BPS successfully maintains both plasticity and stability.

CCS CONCEPTS
• Information systems → Recommender systems; • Computing methodologies → Lifelong machine learning.

KEYWORDS
Recommender Systems, Continual Learning, Incremental Learning, Plasticity, Stability

ACM Reference Format:
OOO, OOO, and OOO. 2024. Balanced Continual Learning in Plasticity and Stability for Recommender System. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (CIKM '24). ACM, New York, NY, USA, 10 pages. https://doi.org/XXXXXXX.XXXXXXX

Figure 1: Overview of BPS. (a) BPS relies on changes in individual users' preferences over time as a critical factor. (b) By applying this factor to personalized adaptive sampling and adaptive regularization methods, BPS is able to simultaneously consider plasticity and stability for continual learning in recommender systems.

1 INTRODUCTION
Deep learning-based recommender systems are predominantly trained in an offline manner, also known as full batch training [25]. However, the periodic execution of offline training is inefficient, as it demands substantial computational costs and resources. Consequently, there is growing interest in recommender systems based on continual learning, which update models incrementally using only newly added datasets [19, 22, 23].
Continual learning offers the benefit of reduced computational costs and resource usage, but it suffers from catastrophic forgetting, where previously acquired knowledge is lost [4, 9]. Therefore, research in continual learning for recommender systems has also focused on improving stability to mitigate catastrophic forgetting, similar to the approaches used in computer vision (CV) [21] and natural language processing (NLP) [6].

However, focusing exclusively on stability when applying continual learning in recommender systems is insufficient, as user preferences naturally change over time in real-world scenarios. Continual learning in recommender systems must incorporate the ability to reflect new user preferences, which are not present in past datasets but emerge in newly added ones [5]. This ability is referred to as plasticity [17]. If plasticity is neglected and the emphasis is placed solely on alleviating forgetting, an over-stability issue arises, where previous knowledge is overly retained and changes in user preferences are inadequately addressed [5, 17]. Therefore, compared to continual learning studies in CV and NLP, plasticity holds particular importance in recommender systems.

Effective continual learning in recommender systems necessitates a balance between plasticity and stability. However, achieving balanced continual learning poses two main challenges:
- Individual preference changes: User preferences change at varying levels for each individual within recommender systems. Thus, it is essential to model these changes by maintaining a balance between new preferences (plasticity) and existing preferences (stability) during continual learning.
- Personalized continual learning: While existing continual learning methods may implicitly capture changes in user preference during model training, they cannot explicitly address changes at the individual level [7, 22, 23]. Therefore, a continual learning method that accounts for the personalized balance between plasticity and stability is required.

We propose a method for achieving effective continual learning in recommender systems through the BPS (Balanced Continual Learning in Plasticity and Stability for Recommender System) framework, as shown in Figure 1, which considers changes in each user's preferences and applies personalized continual learning based on these changes. First, to capture individual preference changes, we represent user preferences at each point in time as distributions and measure the similarity between these distributions to accurately understand changes in user preferences.

Next, we implement personalized continual learning using experience replay and regularization methods. We begin by calculating an adaptive sampling size inversely proportional to the degree of preference change. We then apply an algorithm that samples past user interactions similar to the top-k items recommended by the model at the previous time point. This approach allows for increased experience replay for users with minor preference changes and enables the sampling of past datasets that represent users' existing preferences, thereby balancing plasticity and stability. Additionally, by applying appropriate regularization strengths for each user based on their degree of preference change, we can mitigate overfitting in the experience replay method and enhance its effectiveness.

In summary, our main contributions are as follows:
- We introduce the BPS framework, a novel approach for continual learning in recommender systems that balances plasticity and stability. This framework advances beyond prior studies that primarily concentrated on mitigating catastrophic forgetting.
- We develop a metric to measure changes in user preferences. This allows for precise modeling of the degree of user preference change at each point in the continual learning process.
- We propose an adaptive sampling and regularization method for personalized continual learning, which balances plasticity and stability by determining the sample size for each user and employing fine-grained sampling and regularization to reflect preferences accurately.
- Through various experiments, we demonstrate the superior continual learning performance of BPS compared to baseline models, and we establish the necessity of capturing user preference changes and of personalized continual learning.

The remainder of this paper is organized as follows. First, we review related work in Section 2 and define the problem statement in Section 3. Next, we explain in detail how the BPS framework considers both plasticity and stability in Section 4. In Section 5, we conduct various experiments to verify the superiority of the BPS framework and analyze the experimental results. Finally, Section 6 summarizes this paper and suggests directions for future work.
2 RELATED WORK
Continual learning in recommender systems can be categorized into regularization-based methods [10, 19, 20, 23], experience replay-based methods [1, 11, 15, 22], and model isolation methods [5].

2.1 Regularization-based Methods
Regularization-based methods aim to enhance learning stability by adding a specific regularization term to the loss function. For instance, IRMF [10] addresses the mathematical complexity resulting from the inter-dependencies between model parameters by adding regularization terms when retraining matrix factorization (MF) models. IncCTR [19] employs a fixed-size window for continual learning, mitigating forgetting through knowledge distillation (KD), with KD-batch serving as the teacher model and KD-self as the student model. GraphSAIL [23] leverages KD in graph neural network (GNN)-based recommender models, proposing a technique to maintain the topological information of nodes between the teacher and student models by integrating global-structure, local-structure, and self-structure loss terms into the loss function. Additionally, LWC-KD [20] introduces a method to preserve both graph structure and layer structure in GNN-based recommender models through contrastive learning.

Although existing regularization-based methods have demonstrated acceptable performance, they have limitations in addressing individual preference changes. Specifically, they apply the same intensity of regularization to all users, which can lead to over-stability for those with significantly changing preferences. In other words, these methods do not account for personalized regularization strengths based on individual preference changes.

2.2 Experience Replay-based Methods
Experience replay-based methods explicitly preserve old knowledge by incorporating a small portion of past datasets into the current training datasets. GAG [15] introduces the Wasserstein reservoir, a sampling technique based on the absence period of items. Similarly, FeSAIL [22] proposes a staleness-aware sampling technique, which heuristically samples items with features absent from the current training dataset. ADER [11] assumes that frequently occurring items in the past are influential; it determines adaptive sampling sizes for each item based on this influence and achieves superior performance by training with the current training data. Inverse Degree [1] proposes a sampling technique for GNN-based recommender systems that preserves graph structure by applying a sampling probability proportional to the user's inverse degree in the interaction graph.

Experience replay methods effectively enhance stability by directly utilizing previous experiences to recover the model's old knowledge. However, placing excessive emphasis on historical datasets can lead to overfitting, causing the model to inadequately capture the characteristics of new datasets.

2.3 Model Isolation-based Methods
Model isolation-based methods aim to learn lateral connections to past models while learning new information. Ostapenko et al. [12] introduce the trade-off between knowledge retention (i.e., stability) and knowledge expansion (i.e., plasticity),
Figure 2: Visualization of individual preference changes over time. (a) The embedding changes of the top three dynamic users with the largest preference changes and the top three static users with the smallest preference changes, visualized using t-SNE. (b) The preference change rates, calculated by Jensen-Shannon Divergence (JSD), of the top 20% of dynamic users and the top 20% of static users, presented as a histogram.

proposing an effective expansion strategy that combines layer projection for previous tasks with layer accumulation for new tasks. BNS [14] adopts a continual learning approach that incorporates reinforcement learning during task transitions to prevent catastrophic forgetting and facilitate knowledge transfer. Reflecting this trend, a recent study, DEGC [5], explores model isolation in recommender systems: DEGC proposes a continual learning method for graph-based recommender models through graph convolution pruning and expansion.

Model isolation can handle various tasks without significant structural changes, but maintaining detailed models for each task can incur additional costs. Furthermore, with frequent continual learning cycles or large datasets, its drawbacks become more pronounced compared to other methods. Establishing model isolation as a suitable solution for recommender systems with frequent task changes therefore remains challenging.

3 PROBLEM STATEMENT
We address the challenge of balancing plasticity and stability in continual learning, a crucial aspect that existing studies have not adequately considered. While it is essential to preserve existing knowledge by enhancing memory stability, it is equally important to maintain learning plasticity for the flexible acquisition of new information [17]. The ultimate aim of continual learning is therefore to optimize the model's performance across both previous and new tasks. Given the inherent trade-off between plasticity and stability [17], balancing these aspects in accordance with the model's objectives is essential.

We examine the differing priorities in continual learning across domains, specifically highlighting the contrast between CV, NLP, and recommender systems. In CV and NLP, the focus is predominantly on minimizing performance degradation on previous tasks, emphasizing stability [2, 6, 21]. Conversely, in recommender systems, learning plasticity is more critical due to the dynamic nature of user preferences: unlike CV and NLP tasks, recommender systems prioritize predicting future user preferences over analyzing past user behaviors [13]. Additionally, recommender systems necessitate a balance between plasticity and stability tailored to the unique needs of each user. For instance, maintaining stability is crucial for users with consistent preferences, whereas enhancing plasticity is vital for users whose preferences change frequently. The first step towards achieving a balance between plasticity and stability in continual learning for recommender systems is capturing individual preference changes.
To analyze these changes for each user in a real-world scenario, we utilize the MovieLens dataset to measure preference shifts over time, as shown in Figure 2. We calculate the Jensen-Shannon Divergence (JSD) for each user by comparing the trained embedding vectors at consecutive time points, $t_n$ and $t_{n-1}$. This JSD value quantifies user preference changes at each time point, allowing us to categorize users as dynamic (with significant preference changes) or static (with minimal preference changes). Figure 2(a) visualizes user embeddings over time using t-SNE¹, revealing a clear shift for dynamic users between $t_{n-1}$ and $t_n$, while static users show little to no shift. Figure 2(b) presents a histogram of JSD values at each time point, illustrating a distinct separation between dynamic and static users.

Figure 2 illustrates that the magnitude of preference changes varies significantly among users in recommender systems. While several studies have integrated time functions into MF models to dynamically model user bias, item bias, and user latent vectors [8], or analyzed user behavior sequences to capture these changes [24], these methods primarily focus on offline learning at specific time points. Consequently, they are not well suited for continual learning scenarios where datasets are incrementally provided.

The absence of a method to measure user preference changes in recommender systems hinders personalized continual learning. Existing studies on continual learning for recommender systems typically apply uniform regularization strength to all users [20, 23] or replay existing knowledge at the overall model level [1, 11, 22], without accounting for individual preference changes. Consequently, these approaches are not optimized for personalized continual learning in recommender systems, which can lead to over-stability for dynamic users and over-plasticity for static users.

In summary, effective continual learning in recommender systems requires more than just mitigating catastrophic forgetting; it must consider the balance between plasticity and stability. To achieve this goal, it is essential to precisely track and reflect the evolving preferences of each user and to optimize personalized continual learning based on these individual changes.

¹https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE
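To make this analysis concrete, the following minimal sketch (our own illustration, not the authors' released code; all function names are hypothetical) computes a per-user JSD between embedding snapshots at consecutive time points and flags the top and bottom 20% of users as dynamic and static, mirroring the split behind Figure 2(b):

```python
import numpy as np

def softmax(x):
    # Embedding vector -> probability distribution (cf. Eq. (2), Section 4.2)
    e = np.exp(x - x.max())
    return e / e.sum()

def jsd(u_now, u_prev):
    # Jensen-Shannon divergence between a user's embedding
    # distributions at two consecutive time points
    p, q = softmax(u_now), softmax(u_prev)
    m = 0.5 * (p + q)
    kld = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kld(p, m) + 0.5 * kld(q, m)

def split_users(emb_now, emb_prev, quantile=0.2):
    # emb_*: {user: np.ndarray}; the top/bottom 20% by JSD are
    # treated as dynamic/static users, as in Figure 2
    scores = {u: jsd(emb_now[u], emb_prev[u]) for u in emb_prev if u in emb_now}
    vals = np.array(list(scores.values()))
    hi, lo = np.quantile(vals, 1 - quantile), np.quantile(vals, quantile)
    dynamic = [u for u, s in scores.items() if s >= hi]
    static = [u for u, s in scores.items() if s <= lo]
    return dynamic, static
```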
Figure 3: Overview of the BPS framework. The black line represents the flow of interaction data. The blue line is related to the measurement and reflection of changes in user preferences. The red line represents the actual input data to the neural network. The yellow and green lines indicate the strength of regularization according to the degree of change in user preferences. The purple line indicates the forward and backward propagation flow of the model.

4 PROPOSED METHOD
We propose a framework for continual learning in recommender systems, named BPS. The primary objective of BPS is to achieve an optimal balance between plasticity and stability. This objective is accomplished through the accurate measurement of user preference changes, as well as the implementation of personalized adaptive sampling (PAS) and personalized adaptive regularization (PAR).

4.1 Overall Structure of the BPS Framework
Figure 3 provides an overview of the BPS framework, which comprises three key components: (a) the user preference change detector, (b) PAS, and (c) PAR.

Firstly, Figure 3(a) illustrates the user preference change detector, which is elaborated upon in Section 4.2. To estimate users' preferences at each time point, we begin by learning the embedding $\mathbf{u}^t$ for each user $u$ at time $t$ by training the model with the new interactions at that time point. This learned embedding is then converted into a probability distribution $\mathbf{p}^t$. Subsequently, we compute $JSD^t_u$ between $\mathbf{p}^{t-1}$ and $\mathbf{p}^t$. Finally, we use $\delta^t_u$ as the measure of preference change for user $u$ at time $t$.

Figure 3(b) shows the PAS algorithm, which is discussed in Section 4.3. We employ $\delta^t_u$ to determine the sampling size $S^t_u$ from each user's historical interactions, ensuring that the samples are representative of each user's historical preferences. This allows static users to effectively preserve their past preferences with a larger sample size. Finally, we implement the experience replay method by training the model with $E^t$, sampled from past interactions, along with the new interactions $D^t$.

Figure 3(c) presents the PAR component, discussed in Section 4.4. Using the change calculated in (a) as $1-\delta^t_u$, we compute the loss $L_{PAR}$, which reflects the individual regularization strength. This loss is then added to the training loss $L_{BPR}$ to implement PAR. This approach allows us to reduce the regularization strength for dynamic users, thereby accommodating more of their new preferences.
4.2 User Preference Change Detector
Given that user preferences change differently over time, it is crucial to measure these changes and assess their significance for each user. To do so, we model the extent of change in user preferences using each user's embedding vector at each time point. Let $\mathcal{U}$ and $\mathcal{V}$ be the set of users and the set of items, respectively. We define the $d$-dimensional embedding vectors of user $u \in \mathcal{U}$ and item $v \in \mathcal{V}$ at time $t$ as follows:

$$\mathbf{u}^t, \mathbf{v}^t \in \mathbb{R}^d \qquad (1)$$

These embeddings are utilized to model users and items at each time point. Additionally, they help resolve the data sparsity issue that can occur when directly comparing interaction data, and they allow for a more effective representation of user preferences. We denote by $\mathcal{U}^t \subseteq \mathcal{U}$ the set of users who have interacted with at least one item up to time $t$.

Next, we compare the distributions of user embedding vectors to measure the changes in user preferences. This comparison is conducted using the JSD [3]. JSD provides a precise measure of the change in user preferences because it calculates an averaged distance that takes into account the mixture of the two distributions, which better reflects the central tendency of the change. The distribution $\mathbf{p}^t_u$ is calculated only for users $u \in \mathcal{U}^{t-1}$, because new users who appear at time $t$ cannot be compared to the past. The distribution $\mathbf{p}^t_u$ is defined by a softmax function $\sigma$, where the $k$-th
probability of the distribution $\mathbf{p}^t_u$ is defined as follows:

$$\sigma_k(\mathbf{u}^t) = \frac{\exp(u^t_k)}{\sum_{j=1}^{d} \exp(u^t_j)} \qquad (2)$$

Subsequently, the user's preference change over time, $JSD^t_u$, is measured by the JSD between the preference distributions $\mathbf{p}^t_u$ and $\mathbf{p}^{t-1}_u$ as follows:

$$JSD^t_u = \frac{1}{2} KLD\!\left(\mathbf{p}^t_u \,\|\, \bar{\mathbf{p}}_u\right) + \frac{1}{2} KLD\!\left(\mathbf{p}^{t-1}_u \,\|\, \bar{\mathbf{p}}_u\right) \qquad (3)$$

Here, $KLD$ denotes the Kullback-Leibler Divergence, and $\bar{\mathbf{p}}_u = (\mathbf{p}^t_u + \mathbf{p}^{t-1}_u)/2$ is the mean distribution of $\mathbf{p}^t_u$ and $\mathbf{p}^{t-1}_u$. Consequently, $JSD^t_u$ represents the distance between the two distributions: the larger the preference change, the larger the $JSD^t_u$, while a low value indicates a small preference change. This JSD-based user modeling can accurately measure user preference changes. However, simply applying a threshold to the change in user preference to classify users into two types (i.e., dynamic and static) may distort information due to the threshold setting. Therefore, to utilize $JSD^t_u$ effectively, we use $\delta$, obtained by applying a min-max normalization function $f$ to the JSD values at each time point:

$$\delta^t_u = f(JSD^t_u) \qquad (4)$$

By utilizing $\delta^t_u$, we can represent the change in users' preferences as a relative magnitude across users while avoiding information distortion due to threshold settings. These $\delta^t_u$ values are used as inputs to PAS (Section 4.3) and PAR (Section 4.4), serving as a balance metric between existing and new preferences.

4.3 PAS: Personalized Adaptive Sampling
In existing continual learning, replay-based methods that reflect previously preferred knowledge at a higher rate within a fixed sampling size have shown high performance [11, 22]. While this effectively preserves the model's knowledge, it does not consider the balance between plasticity and stability. Therefore, a new replay method that applies $\delta^t_u$ is required. We propose to sample inversely proportional to the magnitude of each user's preference change under a fixed total sample size:

$$S^t_u = N \cdot \sigma_u(-\lambda \cdot \delta^t) \qquad (5)$$

where $N$ is the total fixed sampling size of historical interactions to be replayed, $\sigma_u(-\lambda \cdot \delta^t)$ is the sampling rate according to $\delta^t_u$ for each user $u \in \mathcal{U}^{t-1}$, and $\lambda$ is a hyperparameter that controls the scale.

$S^t_u$ denotes the sampling size for user $u$ at time $t$, which increases as $\delta^t_u$ decreases. In other words, for users with small preference changes, we increase the sampling size to strengthen memory stability; for users with large preference changes, we increase learning plasticity by allowing more new preferences to be incorporated. As a result, $\delta^t_u$ enables fine-grained sampling-size adjustments for each user, striking the right balance between plasticity and stability.
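The computation of Equations (4) and (5) can be sketched as follows, assuming that $\sigma_u$ denotes a softmax taken over all users in $\mathcal{U}^{t-1}$ and that fractional sizes are truncated to integers (both are our reading; the paper does not spell these details out):

```python
import numpy as np

def sampling_sizes(jsd_by_user, N=30000, lam=1.0):
    # Eq. (4): min-max normalize per-user JSD values to obtain delta
    users = list(jsd_by_user)
    j = np.array([jsd_by_user[u] for u in users])
    delta = (j - j.min()) / (j.max() - j.min() + 1e-12)
    # Eq. (5): a softmax over -lambda * delta splits the fixed replay
    # budget N, so users with smaller preference change get larger shares
    w = np.exp(-lam * delta)
    share = w / w.sum()
    return {u: int(N * s) for u, s in zip(users, share)}
```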
So far, we have determined the sampling size for each user; next, we must decide which historical interactions to sample. The simplest option is to randomly sample historical interactions up to the size calculated by Equation (5). While this can yield impressive performance gains, it has the inherent limitation of randomly mixing old and new knowledge: although random sampling reflects a variety of knowledge, it is not an effective way to precisely reflect old preferences and thereby enhance stability. To improve on this, we propose the sampling algorithm shown in Algorithm 1. For each user at time $t$, we perform experience replay by sampling past interactions whose items are similar to the average embedding of the model's top-k recommended items at the previous time $t-1$, prioritizing items with which the user has interacted in the past. This trains the model to recommend items similar to those consumed over the long term for users with small preference changes, and items consumed in the short term for users with large preference changes. Because this method is based on the previous model's recommendations, repeated continual learning reinforces the knowledge the model needs, providing better recommendation results and helping to extract representative past behaviors.

Algorithm 1: Personalized Adaptive Sampling
Require: Data block $\mathcal{D}^{t-1} = \cup_{T<t} D^T$ up to time $t-1$
Ensure: Sampled data $E^t$ for time $t$
1:  $E^t \leftarrow \emptyset$
2:  $\mathcal{U}^{t-1} \leftarrow$ interacted users $\{u : (u,v) \in \mathcal{D}^{t-1}\}$
3:  for each user $u \in \mathcal{U}^{t-1}$ do
4:      $\mathcal{V}^{t-1}_{topk} \leftarrow$ top-k recommended items for user $u$
5:      $\bar{\mathbf{v}} \leftarrow \frac{1}{k}\sum_{v \in \mathcal{V}^{t-1}_{topk}} \mathbf{v}^{t-1}$
6:      $S^t_u \leftarrow N \cdot \sigma_u(-\lambda \cdot \delta^t_u)$
7:      $\mathcal{V}^{t-1}_u \leftarrow$ interacted items $\{v : (u',v) \in \mathcal{D}^{t-1},\, u = u'\}$
8:      while $S^t_u > 0$ do
9:          $v \leftarrow \arg\min_{v \in \mathcal{V}^{t-1}_u} \|\mathbf{v}^{t-1} - \bar{\mathbf{v}}\|_2$
10:         $S^t_u \leftarrow S^t_u - 1$
11:         $\mathcal{V}^{t-1}_u \leftarrow \mathcal{V}^{t-1}_u \setminus \{v\}$
12:         $E^t \leftarrow E^t \cup \{(u,v)\}$
13:     end while
14: end for
15: return $E^t$

Algorithm 1 details this process. In lines 4 and 5, the average embedding $\bar{\mathbf{v}}$ of the top-k items recommended at time $t-1$ for user $u$ is computed. In line 6, the sampling size for the user is determined. Then, in lines 7 to 13, the item interactions most similar to $\bar{\mathbf{v}}$ among the user's past interactions are repeatedly sampled. The L2 norm is used as the similarity metric between the average item embedding and the target item for simplicity of calculation. Through this process, we finally obtain $E^t$, the samples to be used for experience replay at the current time.
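A compact Python rendering of Algorithm 1 might look as follows (a sketch under our own data layout; sorting once by L2 distance replaces the repeated argmin of lines 8-13 and yields the same selection):

```python
import numpy as np

def personalized_adaptive_sampling(hist, item_emb, topk, sizes):
    # hist: {user: [item, ...]} past interactions up to t-1
    # item_emb: {item: np.ndarray} item embeddings at t-1
    # topk: {user: [item, ...]} the model's top-k recommendations at t-1
    # sizes: {user: int} per-user replay sizes S_u^t from Eq. (5)
    E = []
    for u, items in hist.items():
        if not topk.get(u):
            continue
        # lines 4-5: mean embedding of the top-k recommended items
        v_bar = np.mean([item_emb[v] for v in topk[u]], axis=0)
        # lines 8-13: take the past items closest to v_bar in L2 distance
        ranked = sorted(items, key=lambda v: np.linalg.norm(item_emb[v] - v_bar))
        E.extend((u, v) for v in ranked[: sizes.get(u, 0)])
    return E
```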
Our proposed PAS method adjusts the sampling size based on each user's preference changes. While previous studies have suggested sampling methods suited to maintaining existing knowledge at the model-wide level, we propose a new method that samples inversely proportional to the degree of preference change at the individual user level. This enables balanced continual learning according to the degree of each user's preference changes.

4.4 PAR: Personalized Adaptive Regularization
In general, regularization is critical for preventing model overfitting and improving generalization performance. Existing continual learning studies use regularization to preserve topological information in the graph structure [20, 23]. However, this approach applies the same regularization strength to all users, which can lead to inappropriate regularization strengths and overlook differing user preference changes, even though each user has different traits.

In this paper, we propose a method to adjust the regularization strength for each user. Specifically, it applies different levels of regularization strength based on the degree of change in each user's preferences, considering the balance between plasticity and stability. We adjust the regularization strength on each user's embedding vector as in Equation (6). This reuses the $\delta^t_u$ measured in Equation (4), thus incurring no additional cost:

$$L_{PAR} = \sum_{u \in \mathcal{U}^t} \|\mathbf{u}^t\|^2 \left(\alpha + \beta(1 - \delta^t_u)\right) \qquad (6)$$

Here, $\alpha$ and $\beta$ are hyperparameters that control the strength of the regularization: $\alpha$ determines the default strength, while $\beta$ scales the contribution of $\delta^t_u$. The model is then trained by adding Equation (6) to the final loss function, as follows:

$$L = L_{BPR} + \lambda_{reg} L_{PAR} \qquad (7)$$

where $L_{BPR}$ is the Bayesian Personalized Ranking (BPR) loss, widely used in recommender systems [23], and $\lambda_{reg}$ is a hyperparameter for $L_{PAR}$.

Our proposed method combines replay and regularization techniques. The replay method increases the stability of the model by replaying historical interactions. However, no matter how carefully the sampling is done, there is a risk of overfitting because the overall past preference distribution cannot be fully taken into account. Regularization can mitigate this risk: our proposed PAR prevents the model from over-adapting to dynamic users and improves generalization performance. This allows the model to balance plasticity and stability for better recommendation performance.
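A minimal TensorFlow sketch of Equations (6) and (7) is given below (the paper reports TensorFlow 2.13 and $\alpha=0.7$, $\beta=0.3$, $\lambda_{reg}=0.001$ in Section 5.2; the variable layout is our assumption, not the authors' code):

```python
import tensorflow as tf

def par_loss(user_emb, delta, alpha=0.7, beta=0.3):
    # Eq. (6): per-user L2 penalty on the embedding, weighted so that
    # users with small preference change (small delta) are regularized
    # more strongly, while dynamic users are left freer to move.
    strength = alpha + beta * (1.0 - delta)               # shape [n_users]
    sq_norm = tf.reduce_sum(tf.square(user_emb), axis=1)  # ||u_t||^2 per user
    return tf.reduce_sum(strength * sq_norm)

# Eq. (7): total objective, with bpr_loss computed by the backbone model
# loss = bpr_loss + 0.001 * par_loss(user_emb, delta)    # lambda_reg = 0.001
```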
5 EXPERIMENT
5.1 Dataset
Our experiments are conducted on Gowalla, Foursquare, and MovieLens (ML) 1M, which are commonly used in recommender systems research. These datasets are often utilized in studies of continual learning and consist of a large number of real-world user-item interactions. As shown in Table 1, the datasets differ in numbers of users, items, and interactions, duration (time span), and sparsity.

Table 1: Details of the datasets.
Dataset        Gowalla     Foursquare   ML 1M
Users          29,858      51,919       6,040
Items          40,988      37,320       3,260
Interactions   1,027,464   2,348,893    998,539
Time span      20 months   21 months    3 years
Sparsity       0.9992%     0.9988%      0.9493%

Detailed descriptions of these datasets are as follows:
- Gowalla²: Gowalla is collected from location-based social networks (LBSNs) where users share check-in information about various categories of places. It is widely used for research on implicit-feedback-based recommender systems and is sometimes specialized to specific locations (e.g., restaurants) to optimize performance. Notably, it has the highest sparsity of the three datasets.
- Foursquare³: Foursquare, along with Gowalla, is a representative dataset used to validate the performance of continual learning in recommender systems. It comprises the largest number of users and interactions.
- ML 1M⁴: ML is the most commonly used dataset for recommender systems research, provided by GroupLens, and consists of ratings between users and items. It has the highest density of the three datasets.

²https://snap.stanford.edu/data/loc-gowalla.html
³https://sites.google.com/site/yangdingqi/home/foursquare-dataset
⁴https://grouplens.org/datasets/movielens/1M/

To ensure meaningful performance validation, we preprocess the datasets with k-core filtering before the experiments, setting the threshold to 10 for Gowalla and ML 1M and to 20 for Foursquare (a brief sketch of this step follows). For pure collaborative filtering, all interactions are treated as implicit feedback.
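The k-core preprocessing can be reproduced with an iterative filter such as the following sketch (our illustration of the standard procedure; the authors do not specify their implementation):

```python
import pandas as pd

def kcore_filter(interactions, k=10):
    # Iteratively drop users and items with fewer than k interactions
    # until every remaining user and item has at least k (the k-core).
    df = pd.DataFrame(interactions, columns=["user", "item"])
    while True:
        u_ok = df.groupby("user")["item"].transform("count") >= k
        i_ok = df.groupby("item")["user"].transform("count") >= k
        kept = df[u_ok & i_ok]
        if len(kept) == len(df):
            return kept
        df = kept
```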
5.2 Experimental Setup
We conduct experiments to verify the effectiveness of the proposed BPS; the setup is illustrated in Figure 4.

Figure 4: Experimental setup.

First, we use 60% of the data, sorted in chronological order, as the base block. The remaining 40% is divided into four retraining blocks (i.e., incremental blocks) of equal size. The base block is split 8:1:1 into train, validation, and test sets. For the validation process, we adopt a protocol commonly used in continual learning for recommender systems: when retraining the model at time $t$, the retraining block $D^t$ is used as the training set, and the block for retraining at time $t+1$ is split 50/50 into validation and test sets. This is an essential experiment for recommender systems, analyzing how well the model predicts future preferences for each user.

Effectiveness is measured using Recall and NDCG (Normalized Discounted Cumulative Gain) for the top 20 recommendations. Recall evaluates how well users' relevant preferences are reproduced, regardless of rank; NDCG measures how well the overall ranking of users' preferences is reflected in their recommendations.

Finally, our experiments are implemented in Python 3.9 on an NVIDIA GeForce RTX 3090 GPU under Ubuntu 20.04, with models built in TensorFlow 2.13. We set the learning rate to 0.0005, the dropout rate to 0.2, the regularization parameter $\lambda_{reg}$ to 0.001, the embedding dimension $d$ to 128, the sampling scaler $\lambda$ to 1, and the PAR parameters $\alpha$ to 0.7 and $\beta$ to 0.3. The sampling size $N$ is set to 30,000 for all datasets, ensuring consistency across baselines for fair comparison.
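For reference, the two metrics can be computed per user as in the sketch below (standard Recall@k and NDCG@k definitions; aggregation over users and blocks is omitted, and the function name is ours):

```python
import numpy as np

def recall_ndcg_at_k(ranked_items, test_items, k=20):
    # ranked_items: the model's ranked recommendation list for one user
    # test_items: the user's held-out items in the next block
    hits = [1.0 if item in test_items else 0.0 for item in ranked_items[:k]]
    recall = sum(hits) / max(len(test_items), 1)
    dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(test_items), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```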
5.3 Baselines
For the experiments, we use Neural Graph Collaborative Filtering (NGCF) [18], which is often used for continual learning in recommender systems, as the backbone. The baselines used for comparative evaluation are as follows:
- Finetune: The existing recommender model is retrained using only the new interactions, showing the accuracy obtained without any continual learning method.
- Uniform: Randomly selects past interactions within a defined sampling size during retraining and incorporates them into learning, representing the most basic replay-based method.
- GraphSAIL [23]: A representative regularization-based method that preserves the local, global, and self topological information in graph-based recommendation models.
- SGCT [20]: A regularization-based method that performs contrastive learning on the embeddings of each node for knowledge distillation, constructing positive samples only from the user-item graph.
- MGCT [20]: Similar to SGCT, a regularization-based method that uses contrastive learning for knowledge distillation, but derives positive samples from additional graphs, such as the item-item graph, in addition to the user-item graph.
- LWC-KD [20]: A regularization-based method built on MGCT, introducing intermediate-layer distillation for layer-level supervision.
- ContinualGNN [16]: A representative replay-based continual graph learning method that adjusts experience replay and knowledge distillation to integrate existing patterns.
- Inverse Degree [1]: A replay-based method that sets the sampling probability of each interaction proportional to the inverse degree of the user in the interaction graph.
- DEGC [5]: A model isolation-based method that optimizes for new interactions through graph convolution pruning and expansion.

5.4 Overall Performance
Table 2 compares the performance of BPS against the baseline methods. 'Avg' denotes the average performance over Blocks 1-3, and 'Imp' is the improvement rate relative to Finetune.

Table 2: Overall performance. GraphSAIL, SGCT, MGCT, and LWC-KD are regularization-based; ContinualGNN and Inverse Degree are replay-based; DEGC is model isolation-based.

Gowalla
Method           Recall@20↑                                     NDCG@20↑
                 Block1  Block2  Block3  Avg     Imp            Block1  Block2  Block3  Avg     Imp
Finetune         0.0631  0.0712  0.0741  0.0695  -              0.0210  0.0196  0.0179  0.0195  -
Uniform          0.0691  0.0713  0.0738  0.0714  +2.79%         0.0203  0.0178  0.0182  0.0188  -3.79%
GraphSAIL        0.0757  0.0746  0.0752  0.0751  +8.17%         0.0229  0.0196  0.0187  0.0204  +4.67%
SGCT             0.0644  0.0681  0.0677  0.0667  -3.95%         0.0178  0.0167  0.0165  0.0170  -12.78%
MGCT             0.0582  0.0723  0.0723  0.0676  -2.66%         0.0166  0.0182  0.0172  0.0173  -11.11%
LWC-KD           0.0587  0.0639  0.0706  0.0644  -7.29%         0.0170  0.0166  0.0168  0.0168  -13.77%
ContinualGNN     0.0675  0.0712  0.0694  0.0693  -0.17%         0.0197  0.0181  0.0173  0.0184  -5.79%
Inverse Degree   0.0687  0.0776  0.0772  0.0745  +7.24%         0.0192  0.0196  0.0184  0.0191  -2.10%
DEGC             0.0649  0.0712  0.0749  0.0703  +1.27%         0.0185  0.0181  0.0184  0.0183  -6.06%
BPS (Ours)       0.0788  0.0839  0.0838  0.0821  +18.25%        0.0212  0.0204  0.0199  0.0205  +5.19%

Foursquare
Finetune         0.1051  0.1175  0.1499  0.1242  -              0.0522  0.0585  0.0746  0.0618  -
Uniform          0.1188  0.1246  0.1598  0.1344  +8.23%         0.0578  0.0615  0.0777  0.0657  +6.33%
GraphSAIL        0.1132  0.1200  0.1478  0.1270  +2.27%         0.0567  0.0598  0.0730  0.0632  +2.26%
SGCT             0.1110  0.1151  0.1336  0.1199  -3.42%         0.0557  0.0582  0.0659  0.0599  -2.95%
MGCT             0.1162  0.1207  0.1418  0.1262  +1.65%         0.0604  0.0613  0.0693  0.0637  +3.10%
LWC-KD           0.1132  0.1184  0.1405  0.1240  -0.11%         0.0584  0.0608  0.0713  0.0635  +2.80%
ContinualGNN     0.1204  0.1270  0.1525  0.1333  +7.36%         0.0600  0.0630  0.0747  0.0659  +6.68%
Inverse Degree   0.1075  0.1184  0.1526  0.1262  +1.61%         0.0543  0.0569  0.0716  0.0609  -1.35%
DEGC             0.1163  0.1197  0.1548  0.1303  +4.92%         0.0579  0.0591  0.0748  0.0639  +3.48%
BPS (Ours)       0.1203  0.1283  0.1582  0.1356  +9.21%         0.0595  0.0641  0.0779  0.0671  +8.70%

ML 1M
Finetune         0.0695  0.0584  0.0546  0.0609  -              0.0093  0.0079  0.0264  0.0145  -
Uniform          0.0582  0.0590  0.0615  0.0596  -2.15%         0.0086  0.0075  0.0285  0.0149  +2.32%
GraphSAIL        0.0638  0.0610  0.0620  0.0623  +2.26%         0.0088  0.0079  0.0277  0.0148  +1.99%
SGCT             0.0480  0.0365  0.0417  0.0421  -30.89%        0.0081  0.0063  0.0241  0.0128  -11.77%
MGCT             0.0574  0.0523  0.0527  0.0541  -11.11%        0.0088  0.0072  0.0262  0.0140  -3.32%
LWC-KD           0.0591  0.0616  0.0479  0.0562  -7.68%         0.0088  0.0083  0.0255  0.0142  -2.26%
ContinualGNN     0.0668  0.0555  0.0647  0.0623  +2.40%         0.0091  0.0080  0.0290  0.0154  +6.06%
Inverse Degree   0.0571  0.0599  0.0584  0.0585  -3.95%         0.0085  0.0076  0.0293  0.0151  +4.15%
DEGC             0.0654  0.0633  0.0550  0.0612  +0.59%         0.0086  0.0087  0.0272  0.0148  +2.22%
BPS (Ours)       0.0669  0.0663  0.0622  0.0651  +6.59%         0.0093  0.0085  0.0293  0.0157  +8.09%

On the Gowalla dataset, GraphSAIL among the regularization-based methods and Inverse Degree among the replay-based methods perform quite remarkably, whereas DEGC, a model isolation-based method, shows relatively weaker performance. These results indicate that regularization and replay methods are well suited to Gowalla. Our BPS achieves the highest performance by considering both replay and regularization: approximately an 11.72% improvement over Finetune, 4.91% over GraphSAIL, and 8.86% over Inverse Degree.

On the Foursquare dataset, both replay-based methods, Uniform and ContinualGNN, show remarkable performance. That even simple random sampling like Uniform achieves high performance indicates that increasing stability to maintain users' existing preferences is advantageous on this dataset; in this respect, ContinualGNN maintains existing preferences well, resulting in high performance. Nevertheless, our proposed BPS achieves the highest performance: about 1.81% over ContinualGNN, owing to its optimization of personalized plasticity and stability; about 6.55% over the regularization-based GraphSAIL; and approximately 8.96% and 1.57% over Finetune and Uniform, respectively.

On the ML 1M dataset, the relative performance of Finetune and Uniform reverses as the blocks progress. Specifically, Finetune achieves the highest performance in Block 1 because most users emphasize new preference information, favoring high plasticity. The proposed BPS accounts for these preference changes and reflects each user's balance, resulting in high performance. In contrast, ContinualGNN, which performs well in Block 1, suffers a performance drop in Block 2 because it fails to adapt to the reversal in the importance of plasticity and stability, whereas our method remains robust to these changes. Comparing overall performance on ML 1M, BPS improves by about 4.44% over the replay-based ContinualGNN and about 4.58% over the regularization-based GraphSAIL; in particular, it shows an approximately 6.59% improvement over Finetune.

In summary, our BPS demonstrates overall excellence across multiple metrics on various datasets. While replay- and regularization-based methods perform well on the Gowalla and Foursquare datasets,
BPS provides more accurate recommendations through personalized continual learning optimized for each user. On the ML 1M dataset, BPS proves its superiority by showing robustness compared to other methods, whose performance drops significantly as the blocks progress due to rapid changes in users' preferences. Therefore, BPS is the most suitable approach for personalized continual learning in recommender systems.

5.5 Ablation Study
We conduct an ablation study to determine the impact of the BPS components on performance.
Table 3 shows the comparison results, measured as the average Recall and NDCG for the top 20 recommendations on Gowalla, Foursquare, and ML 1M. Finetune represents the result without any proposed component, and the experiments (Exp) cross-evaluate the proposed components: U is random sampling with the sizes of Equation (5) from the user preference change detector, PAS applies the sampling algorithm introduced in Algorithm 1, and PAR adds personalized adaptive regularization via Equation (7).

Table 3: Ablation study. U is random sampling with the sizes of Equation (5) from the user preference change detector.

            Components       Recall@20↑ (Avg)                   NDCG@20↑ (Avg)
            U    PAS   PAR   Gowalla  Foursquare  ML 1M         Gowalla  Foursquare  ML 1M
Finetune                     0.0695   0.1242      0.0609        0.0195   0.0618      0.0145
Exp1        ✓                0.0751   0.1306      0.0625        0.0199   0.0650      0.0155
Exp2        ✓    ✓           0.0726   0.1335      0.0610        0.0209   0.0640      0.0149
Exp3        ✓          ✓     0.0818   0.1266      0.0625        0.0184   0.0652      0.0157
Exp4 (Ours) ✓    ✓     ✓     0.0821   0.1356      0.0651        0.0205   0.0671      0.0157

Even with only the U component, as in Exp1, there is an overall improvement over Finetune; on the Gowalla dataset, Recall and NDCG improve by approximately 8.18% and 1.81%, respectively. This demonstrates that the personalized sampling size of the U component effectively tracks the balance between plasticity and stability, highlighting the necessity of personalized continual learning. In Exp2, representative interactions of the user's past preferences are selected instead of random sampling; compared to Exp1, the results increase or decrease depending on the case, indicating that sampling tailored to user preferences strengthens stability but can occasionally lead to over-stability. Exp3, which adds PAR to Exp1, generally improves performance, but declines appear in some cases, such as Recall on Foursquare. This occurs because the randomly sampled knowledge of the U component is regularized; to maximize the effectiveness of PAR, it should be applied to specific knowledge for user preferences.
Exp4 is BPS itself, applying PAR to the user's representative past preference knowledge sampled by PAS. This balances plasticity and stability, resulting in excellent performance: on the Gowalla dataset, Recall and NDCG improve by approximately 18.25% and 5.19% over Finetune, indicating that BPS effectively captures user preferences while accurately reflecting priority preferences.

5.6 Case Study
In this subsection, we verify whether the proposed BPS accurately captures the balance between plasticity and stability for dynamic and static users, as introduced in Section 3.

Figure 5: Case study. Using Gowalla, (a) and (b) verify plasticity for the top 20% dynamic users and stability for the top 20% static users, respectively.

As shown in Figure 5, we measure Recall@20 for the top 20% of dynamic and static users, identified by JSD in each block. Since dynamic users represent rapidly changing preference information, we verify learning plasticity on the test sets, as in the typical continual learning experiments of Figure 4. In contrast, since static users do not deviate significantly from their existing preferences, we verify accuracy up to the current time point; for instance, for static users at time $t_2$, memory stability is compared using data from $t_0$ and $t_1$. For a specific analysis, we evaluate BPS against representative regularization, replay, and model-isolation methods that perform well in the overall experiments on the Gowalla dataset: GraphSAIL, Inverse Degree, and DEGC.

First, examining recall for dynamic users in Figure 5(a), GraphSAIL shows the highest performance among the baselines by regularizing overall knowledge without replaying historical interactions. In contrast, the replay-based Inverse Degree causes over-stability for dynamic users, resulting in the lowest performance. As in the overall experiments, BPS performs best, improving by approximately 8.06%, 2.05%, 15.52%, and 14.67% over Finetune, GraphSAIL, Inverse Degree, and DEGC, respectively. These results indicate that BPS effectively considers personalized plasticity by applying small-size replay and regularization for dynamic users.

Next, Figure 5(b) shows that Inverse Degree, which effectively replays existing preferences, achieves the highest performance among the baselines for static users, indicating that replay-based methods contribute significantly to improving stability. However, in all cases our BPS performs best, outperforming Finetune and Inverse Degree by about 15.94% and 6.28%, respectively. This is because BPS replays larger and more precise sets of interactions for static users than Inverse Degree does, thereby enhancing memory stability.
To summarize, the case study demonstrates that personalized continual learning in recommender systems must take into account the changes in each user's preferences. In this regard, the proposed BPS not only accurately predicts future preferences, the goal of recommender systems, but also effectively reproduces users' existing preferences. BPS thus addresses the main challenge of continual learning in recommender systems: balancing plasticity and stability.

6 CONCLUSION
In this paper, we introduced a novel framework called BPS, which focuses on balancing plasticity and stability in continual learning for recommender systems. BPS accurately captures individual preference changes and enables balanced continual learning by incorporating new preferences while preserving existing ones. Additionally, we demonstrated the robustness and effectiveness of BPS through extensive experiments, whose results highlight that BPS enhances both plasticity and stability. For future work, we aim to develop an end-to-end method that can simultaneously detect user preference changes and train the model without requiring pre-computation of user preference changes.
REFERENCES
[1] Kian Ahrabian, Yishi Xu, Yingxue Zhang, Jiapeng Wu, Yuening Wang, and Mark Coates. 2021. Structure aware experience replay for incremental learning in graph-based recommender systems. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2832-2836.
[2] Jihwan Bang, Heesu Kim, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi. 2021. Rainbow Memory: Continual Learning With a Memory of Diverse Samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 8218-8227.
[3] Xu Chen, Junshan Wang, and Kunqing Xie. 2021. TrafficStream: A Streaming Traffic Flow Forecasting Framework Based on Graph Neural Networks and Continual Learning. In International Joint Conference on Artificial Intelligence.
[4] Seungyoon Choi, Wonjoong Kim, Sungwon Kim, Yeonjun In, Sein Kim, and Chanyoung Park. 2024. DSLR: Diversity Enhancement and Structure Learning for Rehearsal-based Graph Continual Learning. In Proceedings of the ACM Web Conference 2024.
[5] Bowei He, Xu He, Yingxue Zhang, Ruiming Tang, and Chen Ma. 2023. Dynamically expandable graph convolution for streaming recommendation. In Proceedings of the ACM Web Conference 2023. 1457-1467.
[6] Zixuan Ke, Bing Liu, Wenhan Xiong, Asli Celikyilmaz, and Haoran Li. 2023. Sub-network Discovery and Soft-masking for Continual Learning of Mixed Tasks. In Findings of the Association for Computational Linguistics: EMNLP 2023. 15090-15107.
[7] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences 114, 13 (2017), 3521-3526.
[8] Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30-37.
[9] Yilun Liu, Ruihong Qiu, and Zi Huang. 2023. CaT: Balanced continual graph learning with graph condensation. In 2023 IEEE International Conference on Data Mining (ICDM). IEEE, 1157-1162.
[10] Xin Luo, Yunni Xia, and Qingsheng Zhu. 2012. Incremental collaborative filtering recommender based on regularized matrix factorization. Knowledge-Based Systems 27 (2012), 271-280.
[11] Fei Mi, Xiaoyu Lin, and Boi Faltings. 2020. ADER: Adaptively distilled exemplar replay towards continual learning for session-based recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems. 408-413.
[12] Oleksiy Ostapenko, Pau Rodriguez, Massimo Caccia, and Laurent Charlin. 2021. Continual learning via local module composition. Advances in Neural Information Processing Systems 34 (2021), 30298-30312.
[13] Danni Peng, Sinno Jialin Pan, Jie Zhang, and Anxiang Zeng. 2021. Learning an adaptive meta model-generator for incrementally updating recommender systems. In Proceedings of the 15th ACM Conference on Recommender Systems. 411-421.
[14] Qi Qin, Wenpeng Hu, Han Peng, Dongyan Zhao, and Bing Liu. 2021. BNS: Building network structures dynamically for continual learning. Advances in Neural Information Processing Systems 34 (2021), 20608-20620.
[15] Ruihong Qiu, Hongzhi Yin, Zi Huang, and Tong Chen. 2020. GAG: Global attributed graph neural network for streaming session-based recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 669-678.
[16] Junshan Wang, Guojie Song, Yi Wu, and Liang Wang. 2020. Streaming graph neural networks via continual learning. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 1515-1524.
[17] Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. 2024. A comprehensive survey of continual learning: Theory, method and application. IEEE Transactions on Pattern Analysis and Machine Intelligence (2024).
[18] Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, and Tat-Seng Chua. 2019. Neural graph collaborative filtering. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. 165-174.
[19] Yichao Wang, Huifeng Guo, Ruiming Tang, Zhirong Liu, and Xiuqiang He. 2020. A practical incremental method to train deep CTR models. arXiv preprint arXiv:2009.02147 (2020).
[20] Yuening Wang, Yingxue Zhang, and Mark Coates. 2021. Graph structure aware contrastive knowledge distillation for incremental learning in recommender systems. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 3518-3522.
[21] Zhenyi Wang, Yan Li, Li Shen, and Heng Huang. 2024. A Unified and General Framework for Continual Learning. In The Twelfth International Conference on Learning Representations (ICLR 2024).
[22] Zhikai Wang, Yanyan Shen, Zibin Zhang, and Kangyi Lin. 2023. Feature staleness aware incremental learning for CTR prediction. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. 2352-2360.
[23] Yishi Xu, Yingxue Zhang, Wei Guo, Huifeng Guo, Ruiming Tang, and Mark Coates. 2020. GraphSAIL: Graph structure aware incremental learning for recommender systems. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2861-2868.
[24] Mengqi Zhang, Shu Wu, Xueli Yu, Qiang Liu, and Liang Wang. 2022. Dynamic graph neural networks for sequential recommendation. IEEE Transactions on Knowledge and Data Engineering 35, 5 (2022), 4741-4753.
[25] Peiyan Zhang and Sunghun Kim. 2023. A survey on incremental update for neural recommender systems. arXiv preprint arXiv:2303.02851 (2023).