Dataset Viewer
text | source
---|---
arXiv:2505.20979v1 [cs.SD] 27 May 2025MelodySim: Measuring Melody-aware Music Similarity for Plagiarism Detection Tongyu Lu∗1, Charlotta-Marlena Geist∗2, Jan Melechovsky1, Abhinaba Roy1, Dorien Herremans1 1Singapore University of Technology and Design 2Otto von Guericke University Magdeburg [email protected], [email protected], [email protected], [email protected], [email protected] ABSTRACT We propose MelodySim , a melody-aware music simi- larity model and dataset for plagiarism detection. First, we introduce a novel method to construct a dataset with focus on melodic similarity. By augmenting Slakh2100; an ex- isting MIDI dataset, we generate variations of each piece while preserving the melody through modifications such as note splitting, arpeggiation, minor track dropout (ex- cluding bass), and re-instrumentation. A user study con- firms that positive pairs indeed contain similar melodies, with other musical tracks significantly changed. Second, we develop a segment-wise melodic-similarity detection model that uses a MERT encoder and applies a triplet neu- ral network to capture melodic similarity. The resultant decision matrix highlights where plagiarism might occur. Our model achieves high accuracy on the MelodySim test set. 1. INTRODUCTION In recent years, the popularity of generative music models has rapidly increased. With the rise of commercial models such as Suno1and Udio2, as well as open source models like Mustango [1] and MusicGen [2], the question of artist- protection question arises. There currently is an ongoing discussion as well as legal battles on how artists should be compensated for the use of their music as training data [3], e.g. Recording Industry Association of America (RIAA) vs. Udio and Suno (June 2024)3. In addition, the mu- sic generated by these models might plagiarize the original training data. In this work, we develop tools that may help with melody-related plagiarism detection. When generative models are trained on (often improp- erly licensed) copyrighted data, it becomes a strong pos- sibility that the generated music plagiarizes the original training data. In particular, diffusion models have shown to be prone to replicate their training data, as shown by [4, 5] on the image generation task. Artists have made public outcries showcasing examples of their work or style repli- cated by generative models4. In literature, we noticed that *These authors contributed equally to this work. 1https://suno.com 2https://udio.com 3https://shorturl.at/YlQ0P 4https://shorturl.at/j5tJbgenerative AI models are typically evaluated in terms of their ability to predict similarly to the input data (accu- racy) rather in terms of the originality of the generated out- put [6]. At the moment there is no clear legal precedent or ruling to tackle the copyright issues on the input data, how- ever, we can examine resulting plagiarism by the output of the generative models. Finding and confirming music plagiarism in general is a complex task. When deciding on plagiarism cases, [7] highlight the necessity of individually considering each case. An automatic plagiarism detection tool could help speed up the process of both flagging new plagiarism cases, as well as confirming expert opinions in existing lawsuits. Such tools might even be integrated into the music gener- ation models themselves to avoid plagiarized output in the first case. 
This task, however, is not trivial, as there is no generally accepted, objective definition of what plagiarized | https://arxiv.org/abs/2505.20979v1 |
music is. In an analysis of 17 lawsuits, [7] observed that the melody was prioritized when deciding on plagiarism, followed by the ‘overall impression’ of the music. This leads us to believe that there is a need for a melody-aware music similarity tool. The existing work on melody simi- larity metrics, however, is limited to the field of symbolic music (MIDI) [8–10]. To be able to deal with real-life court case data and generated music, we develop an audio based melody-aware similarity model in this work. This task is arguably more challenging than using symbolic music, due to the overlay of multiple audio signals, as well as the lack of data to train the model. The contributions of this work include the creation of a novel dataset, MelodySim, which contains 1568 full length instrumental songs, with three additional variations per song, resulting in a total of 6,272 files. These variations contain slight melodic changes in terms of altered pitches, note durations, speed and instrumentation. These changes are subtle such that the resulting matching tracks can still be construed as ‘plagiarized’ in terms of melody. This data is then used to train a Triplet Neural Network with a MERT encoder [11], that minimizes the distance in representation of matching segments and maximizes the distance between different segments. This results in a melody-aware similar- ity embedding that is then used in a classification model to directly predict matching segments. In the following sections, we first provide an overview of related work. Section 3 then describes how we have created the MelodySim dataset. This is followed by a de- scription of the melody-aware triplet neural network that we developed to predict similar music fragments as well as the full-song plagiarism detection method. Finally, the results of our model on the training set as well as an in- the-wild plagiarism dataset are presented, followed by a conclusion. 2. RELATED WORK In this section, we provide a brief overview of how music plagiarism has been defined in existing literature. We then discuss related work on music similarity detection models. 2.1 What is plagiarism? When developing a model for plagiarism, we have to ask ourselves: which elements of music count towards plagia- rism? A lot of popular rock, pop, and folk music shares the same 3-chord progression: I-IV-V , and has a similar drum track, making these elements non-eligible copyright infringement. This leaves other musical features such as melody and timbre as potential sources for plagiarism. Currently, there does not exist a fixed rule set that de- fines plagiarism in music. In a study by [7], 17 music pla- giarism lawsuits were analyzed. The authors observed that the melody was clearly prioritized when deciding on pla- giarism, but always paired with another parameter which in most cases appeared to be the rhythm. Huber also stated that melody is the most discussed aspect in legal disputes, second to ‘overall impression’, which can be considered as the composition of various musical characteristics. Based on this, we decided to build a melody-aware sim- ilarity metric, that not yet looks at the | https://arxiv.org/abs/2505.20979v1 |
melody, but also encodes the music in general through MERT-features [11]. To achieve this, we carefully constructed a new dataset by thoroughly altering musical features in different levels of detail while maintaining the main melody, as explained in Section 3. 2.2 Automatic Music Similarity Detection Most existing work on melody similarity detection is in the symbolic domain. Much of this work is not necessar- ily developed towards plagiarism detection, but could have other goals such as melody retrieval [8], repeated (exact) sample detection [12]. For a more comprehensive historic overview of music similarity models, the reader is referred to [13]. For instance, [8] developed a music similarity model for that was trained on the Meertens Tune Collections dataset [14]. Their recurrent neural network models al- lowed them to consider melody recommendation as a rank- ing problem of similarity. More recently, [15] present a way to generate an originality report, which includes an originality score (based on cardinality) to evaluate how much a generative symbolic music model copies from its training set. These metrics are then used to inform an early stopping mechanism that cuts of training when the optimal level or ‘originality’ is reached on the validation set, thuspreventing the transformer from generating music that is too similar to its training data. In [9], an image-based approach for solving the task of plagiarism detection based on musical features such as rhythm and melody similarity. The authors used the Lakh MIDI dataset [16] and represented the MIDI into 8-bar units and grayscale images. Generated simulated plagia- rism cases were then generated by reversing and removing operations on note and rhythm vectors as well as note se- quences. This work only considered monophonic instru- mental songs. An interesting in-the-wild dataset for plagiarism in the Music Copyright Infringement Cases (MCIC) dataset [10]. The dataset contains music pairs from 116 court cases ( denied: 66, infringed: 32, settled: 18) in both MIDI as well as score form. In the domain of audio similarity research, [17] and [18] developed similarity techniques based on spectrograms and fingerprinting to tackle plagiarism detection. These methods require high computational power with a large fin- gerprint database and tend to result in low accuracy with decreasing audio quality and higher noise level [9, 12]. The resulting similarity relies on general acoustic features extracted from spectrograms and does not directly distin- guish between specific musical characteristics like melody, rhythm or timbre. Another audio-based similarity approach is the Music Replication Assessment (MiRA) tool [19], which includes several similarity metrics for raw audio. In their experi- ments, embedding-based metrics showed the most promis- ing results in terms of robustness and sensitivity. The scope of their work, however, is limited to exact replications in music audio. Their dataset was generated by putting a frac- tion of a reference track into a random point of a target track. The problem of finding reused samples in other songs was tackled in [12]. Their deep learning approach uses a siamese-based convolutional neural network (CNN) with mel-spectrograms and a triplet loss. Their similarity score based on the resulting embeddings consisted | https://arxiv.org/abs/2505.20979v1 |
of a combina- tion of Euclidean distance, cosine similarity and the Pear- son correlation. The model was trained on the WhoSam- pled5dataset. The task of finding replicated samples is also limited to finding exact repetitions. In this work, we aim to improve upon such an approach by including note-level variations to make the algorithm more robust. Our work build upon the gaps in literature by providing the first open, large-scale synthetic audio dataset for au- dio plagiarism. Each song contains three variations with slight music theory-informed melody changes, that con- tains many small melody variations (altered pitches, note durations, speed and instrumentation) while significantly altering the other tracks and timbre. The subtleness of the melody changes ensures that the paired tracks may be con- strued as plagiarized. This new dataset then allows us to train a triplet neural network-based melody-aware similar- ity model for plagiarism detection directly on audio. 5www.whosampled.com 3. MELODYSIM DATASET In order to be able to create a strong melody-aware em- bedding for audio music, we need a suitable dataset to train on. We used various MIDI and audio augmentations to cre- ateMelodySim , a new audio dataset which contains three variations for each song. These variations aim to keep the melody constant (except for tiny changes for robustness), and change other aspects such as removing tracks, chang- ing instruments, inverting chords, changing the tempo and transposing the composition as shown in Figure 1. We thus aim to capture melodic similarity between otherwise dif- ferent songs, as melody is one of the main plagiarism cri- teria [7]. We used 1568 MIDI files from the Slakh2100 dataset [20] as a base dataset to start the augmentations. In the following subsections the augmentation procedure is described in detail. The final dataset consists of 6,272 full-length audio music files, consisting of original pieces with three additional versions for each piece. The dataset and augmentation code are available online6. 3.1 Step 1 - Melody track identification For each of the multi-track MIDI files, we first identify the melody track by training a machine learning model. Our best performing model is a gradient boosting classi- fier model following the approach presented in [21]. A re- fined CMU Computer Music Analysis Dataset7was used for training the model, where we manually relabeled a por- tion of this dataset after noticing a number of incorrect la- bels. The refined dataset is available online8. Taking [21] as a reference, a number of adjustments were made to the input features, that lead to improved re- sults. First, additional track features including polyphony rate and note activation density were added. Secondly, apart from the features from the current track under in- spection, average features of other tracks in the same MIDI file were also computed and added to the classification inputs as reference-features.Through cross-validation, fi- nally a histogram-based gradient boosting model was se- lected as our model, which reached an accuracy of 97% on the validation split of CMU. Through manual inspections, we found that the model generalized well on Slakh2100. Our melody track identification model is available as open | https://arxiv.org/abs/2505.20979v1 |
source9. 3.2 Step 2 - MIDI-level augmentations Now that we have identified the melody track in Step 1, we are able to perform a number of MIDI augmentations on both the instrument- and note-level. Instrument replacement: For each of the MIDI tracks a new instrument are considered. We first group the MIDI instrument indices (from 1 to 128) into ensembles (pianos, guitars, high-register strings, low-register strings, etc.), 6https://huggingface.co/datasets/amaai-lab/melodySim 7https://www.cs.cmu.edu/music/data/melody-identification/ 8https://huggingface.co/datasets/amaai-lab/melodySim 9https://huggingface.co/amaai-lab/MelodySimand then reassign the track instruments with the following rules: 1. with probability 0.2, retain the instrument as it is; 2. if not, then with probability 0.7, change the instrument to another one in its ensemble (e.g., replacing piano with e-piano); 3. otherwise replace the instrument with another one in a different ensemble with similar pitch register; 4. ensure coupled tracks (e.g., piano tracks) to be applied with the same replacement policy; 5. avoid different instrument tracks being replaced into the same instrument. Track removal: 1. with a probability drawn from a uniform distribution of [0.1, 0.5], for each track, mute the track; 2. with a probability of 0.5, mute the percussion track; 3. never mute the melody tracks (identified), bass tracks and other important tracks (vocals, piano or guitar companies). Note splitting: With a probability Pn, split the current note of typical duration (whole notes, half notes, quarter notes) into two of half the original duration. Pnis drawn from a uniform distribution of [0.3, 0.85] for each track n. Chord inversion: For each track, detect block chords (concurrently played notes) consisting of 3 or 4 notes. For each such chord, with a probability Pn, shift the top notes an octave down or the bottom notes an octave up. Pnis drawn from a uniform distribution of [0.3, 0.85] for each track n. Chord argpeggiation: For each track, detect block chords that are in regular durations (1x/2x/3x/4x of quar- ter note). With a probability Pn, split the chord into an arpeggio (consisting of equally-placed chord notes) with the same total duration as the original chord. Pnis drawn from a uniform distribution of [0.3, 0.85] for each track n. 3.3 Step 3 - Audio-level augmentations After augmenting the MIDI files, the resulting audio files are obtained by synthesizing with the Musyng soundfont. Then, a set of audio augmentations (as depicted in Fig- ure 1) is applied to further diversify the different versions, in particular: •Pitch shift: The audio track is pitch-shifted by a random integer of semitones in the range of [-4, 4]. •Time shift: The whole track is shifted by a random time from a range of [-3, 3] seconds. This time shift is used when matching the positive pairs later on. •Tempo change: The audio track’s tempo is altered by a random factor in the range of [0.9, 1.1]. The resulting audio files are then cut into 10 sec long segments each being saved with representative track name, version index, and segment index. 4. MUSIC SIMILARITY MODEL Using the newly created MelodySim, we train a triplet neural network model [22] that that enables the creation of melody-sensitive embeddings of music audio, and | https://arxiv.org/abs/2505.20979v1 |
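To make the Step-3 procedure concrete, the following is a minimal sketch of the audio-level augmentations described above. It assumes librosa for pitch shifting and time stretching (the paper does not specify the implementation); the parameter ranges follow the text, while the function and variable names are ours.

```python
import random
import numpy as np
import librosa

def augment_audio(y, sr, segment_sec=10):
    """Illustrative sketch of the Step-3 audio augmentations:
    pitch shift, tempo change, time shift, then 10-second segmentation.
    Parameter ranges follow the paper; implementation details are assumed."""
    # Pitch shift by a random integer number of semitones in [-4, 4].
    n_steps = random.randint(-4, 4)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

    # Tempo change by a random factor in [0.9, 1.1].
    rate = random.uniform(0.9, 1.1)
    y = librosa.effects.time_stretch(y, rate=rate)

    # Time shift by a random offset in [-3, 3] seconds (realised here as a
    # circular shift; the paper only states the range, not the mechanism).
    shift = int(random.uniform(-3.0, 3.0) * sr)
    y = np.roll(y, shift)

    # Cut the result into non-overlapping 10-second segments.
    hop = segment_sec * sr
    return [y[i:i + hop] for i in range(0, len(y) - hop + 1, hop)]
```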
the computation of the distance or similarity between these embeddings. MIDIMelody track identification1. Non -melody stem removal 2. Instrument replacement1. Note splitting 2. Chord inversion 3. Arpeggiation1. Pitch shift 2. Time shift 3. Segmenting 4. Tempo changeAudioNote -level Instrument -level SynthesizeAudio MIDI Augmentations Audio Augmentations(augmented )Figure 1 . The proposed melody-aware augmentation pipeline used for constructing MelodySim dataset by augmenting Slakh MIDI. anchor positive negativeMERT Encoder MERT Encoder MERT Encoder1D Resnet 1D Resnet 1D ResnetClassifier𝐱anc 𝐱pos 𝐱negsgabs 𝐱anc−𝐱pos sgabs 𝐱anc−𝐱neg Classifier𝐲same 𝐲diff Training shared parameters shared parametersshared parameters waveform inputs MERT embedding sequences melody -aware embeddings sigmoid distance / similarity segment 1 segment 2MERT Encoder MERT Encoder1D Resnet 1D Resnet𝐱1 𝐱2Classifier abs 𝐱1−𝐱2𝐲 similarity =1−𝐲 Inferenceℒtriplet 𝐱anc,𝐱pos,𝐱neg ℒBCE 𝐲same ,0+ℒBCE 𝐲diff,1 shared parameters shared parameters Figure 2 . The proposed architecture for training and inference. sg[·]means "stop gradient" and abs(·)notates element-wise absolute function. 4.1 Triplet dataset To train a triplet neural network model, we reformulate MelodySim into triplets, consisting of an anchor sample, a positive sample similar to the anchor, and a negative sam- ple dissimilar to the anchor. We construct the positive pairs by combining time-aligned segments from the original and augmented tracks. The negative pairs are formed using inter-song segments. The example below illustrates a triplet (anchor ,positive ,negative )structure: anchor =Track 00125/version 0/segment02, positive =Track 00125/original /segment02, negative =Track 00007/version 2/segment12. Each triplet consists of an anchor data sample, a pos- itive data sample that shares the same melody but varies in other characteristics (such as texture, tempo, or instru- mentation), and a negative data sample that differs in both melody and other features. This triplet construction en- sures the model can learn to differentiate between similar and dissimilar musical excerpts based on melody.4.2 Triplet Neural Network As shown in Figure 2, the music similarity model is a triplet neural network (TNN) consisting of a MERT en- coder, a ResNet backbone and a classifier head. The similarity model starts with a MERT encoder, a pretrained state of the art model open source on hugging- face by [11]. For capacity limitation reasons the audio files were fed into the more compact MERT-v1-95M version of the feature extractor and stored as encodings, before using them as input for the adaption network in the training pro- cess. In order to reduce memory load the output features of MERT were postprocessed with a moving average with size=10, stride=10 over the time token axis and the selected hidden states were limited to h3, h6, h9, h12. After MERT encoding, a sequence of trainable 1D con- volutional residual blocks is applied as an adaption net- work. An average pooling layer is applied at the end of 1D Resnet to aggregate the information over time dimension, getting a fixed dimension embedding for its correspond- ing MERT embedding sequence. In a training step, all three components in a triplet (anchor ,positive ,negative ) run through the MERT encoder (parameters frozen) and the 1D Resnet (parameters trainable), getting the corre- sponding embeddings, i.e., xanc,xpos,xneg. 
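The forward pass just described (frozen MERT features passed through a trainable 1D convolutional residual adapter, followed by average pooling over time) can be sketched as follows. This is a minimal illustration under stated assumptions, not the released implementation: the layer widths, number of residual blocks, and all names are ours; only the frozen-encoder / trainable-adapter split, the time-average pooling, and the triplet margin of 1.0 follow the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock1D(nn.Module):
    """One 1D convolutional residual block (width and depth are assumptions)."""
    def __init__(self, dim):
        super().__init__()
        self.conv1 = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class AdapterHead(nn.Module):
    """Trainable adapter on top of frozen MERT features: 1D ResNet + average pooling."""
    def __init__(self, mert_dim=768, emb_dim=256, n_blocks=3):
        super().__init__()
        self.proj = nn.Conv1d(mert_dim, emb_dim, kernel_size=1)
        self.blocks = nn.Sequential(*[ResBlock1D(emb_dim) for _ in range(n_blocks)])

    def forward(self, mert_seq):                 # (batch, time, mert_dim)
        h = self.proj(mert_seq.transpose(1, 2))  # (batch, emb_dim, time)
        h = self.blocks(h)
        return h.mean(dim=-1)                    # fixed-size melody-aware embedding

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)  # Euclidean distance, alpha = 1.0

def training_step(adapter, mert_anc, mert_pos, mert_neg):
    # MERT is frozen, so precomputed embedding sequences are the inputs here.
    x_anc = adapter(mert_anc)
    x_pos = adapter(mert_pos)
    x_neg = adapter(mert_neg)
    return triplet_loss(x_anc, x_pos, x_neg)
```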
To integrate melody-aware information, we update the 1D Resnet parameters through backward propagation | https://arxiv.org/abs/2505.20979v1 |
with the triplet loss, which is defined as $L_{triplet}(x_{anc}, x_{pos}, x_{neg}) = \max(d(x_{anc}, x_{pos}) - d(x_{anc}, x_{neg}) + \alpha, 0)$, where $\alpha = 1.0$ is the margin and $d(x, y) = \|x - y\|_2$ is the Euclidean distance. Finally, a fully-connected classifier is appended at the end to measure the sigmoid distance between embeddings, with output scaled to the range [0, 1]. In each triplet, we inspect a "same case", namely $(x_{anc}, x_{pos})$, and a "different case", $(x_{anc}, x_{neg})$. The classifier takes $\mathrm{abs}(x_{anc} - x_{pos})$ and $\mathrm{abs}(x_{anc} - x_{neg})$ as inputs, giving $y_{same}$ and $y_{diff}$ as outputs, respectively. To train the classifier, we backpropagate the Binary Cross Entropy (BCE) loss $L_{BCE}(y_{same}, 0) + L_{BCE}(y_{diff}, 1) = -\mathrm{mean}(\log(1 - y_{same}) + \log y_{diff})$. In this way, we train the classifier with balanced labels. In addition, we remove the gradient of the classifier inputs (i.e., $\mathrm{abs}(x_{anc} - x_{pos})$ and $\mathrm{abs}(x_{anc} - x_{neg})$) during training, so that the BCE loss does not interfere with the 1D Resnet. In Figure 2 we use $sg[\cdot]$ to denote this "stop gradient" operation. During inference, we use the similarity model as a Siamese neural network: both input audio segments are forwarded through the MERT encoder and the 1D Resnet in the same manner, the absolute difference of their embeddings is computed, and the classification result is obtained. Note that the final output of the inference pipeline is a similarity (also in the range [0, 1]) rather than a sigmoid distance. 4.3 Plagiarism identification Note that the TNN described in the previous section computes similarity between two music segments. It remains to decide whether two entire pieces are plagiarized. To this end, we compute a similarity matrix and design a rule-based decision strategy. Given two pieces, we segment them into 10-second windows $[w_1^{(i)}, i = 1, ..., N_1]$ and $[w_2^{(j)}, j = 1, ..., N_2]$ in the same way as when constructing the MelodySim dataset. Denoting the similarity model (in inference mode) by $s_{ij} = f(w_1^{(i)}, w_2^{(j)})$, we obtain a similarity matrix $S = [s_{ij}] \in [0, 1]^{N_1 \times N_2}$. Next, we threshold the similarity matrix (default $\gamma = 0.5$), obtaining a decision matrix $D = u(S - \gamma)$, where $u(\cdot)$ is the unit step function. Summing the rows or columns of $D$ gives plagiarized counts in both directions, namely $d_{1 \to 2}^{(i)} = \sum_j D_{ij}$ and $d_{2 \to 1}^{(j)} = \sum_i D_{ij}$. If we further define a sensitivity (how many similar segments in piece 2 are enough to decide that a segment in piece 1 is plagiarized, and vice versa), we can obtain the proportion of plagiarized segments from $d_{1 \to 2}$ and $d_{2 \to 1}$. In our test cases, we set the maximum proportion to 0.2, meaning that if both pieces have a number of plagiarized segments larger than 20% of their total segments, the two pieces are considered to stand in a plagiarism relationship. 5. EXPERIMENTS 5.1 Experimental setup The similarity model was trained on 95% of the MelodySim dataset, reserving the remaining 5% for evaluation. Training was executed on a single Nvidia V100 GPU for 7.3 hours with a batch size of 512. During training, all MelodySim training tracks were traversed, and anchors were randomly selected along with their corresponding positive and negative samples. To enhance diversity, each track was loaded 4 times per epoch. Training ran for a total of 797 epochs. To thoroughly test the model, we utilize | https://arxiv.org/abs/2505.20979v1 |
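As a concrete illustration of the Section 4.3 decision rule, here is a minimal NumPy sketch. The names are ours, `similarity_fn` stands for the trained segment-level model f, and treating "at least one match" as the per-segment sensitivity is an assumption; the threshold gamma = 0.5 and the 20% proportion rule follow the paper.

```python
import numpy as np

def plagiarism_decision(similarity_fn, segments_1, segments_2,
                        gamma=0.5, max_proportion=0.2):
    """Illustrative sketch of the rule-based plagiarism decision of Section 4.3."""
    # Similarity matrix S in [0, 1]^(N1 x N2), one entry per segment pair.
    S = np.array([[similarity_fn(w1, w2) for w2 in segments_2] for w1 in segments_1])

    # Decision matrix D = u(S - gamma), with u the unit step function.
    D = (S >= gamma).astype(int)

    # Plagiarized counts in both directions.
    d_1_to_2 = D.sum(axis=1)   # matches found for each segment of piece 1
    d_2_to_1 = D.sum(axis=0)   # matches found for each segment of piece 2

    # Assumption: one matching segment is enough to flag a segment as plagiarized
    # (a stricter sensitivity threshold could be substituted here).
    prop_1 = np.mean(d_1_to_2 > 0)
    prop_2 = np.mean(d_2_to_1 > 0)
    return prop_1 > max_proportion and prop_2 > max_proportion
```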
the 78 pieces from the test split. Specifically, we construct 546 = 7 ×78 positive pairs , where the factor 7comes from all combi- nations among versions along with self-comparison, i.e., {(orig,orig),(orig,ver1) ,(orig,ver2) , ...,(ver2 ,ver3)}. Correspondingly, we select an equal number of negative pairs to maintain a balanced test set. These negative pairs are formed by combining excerpts from different tracks and randomly sampling from all possible combinations. 5.2 Objective evaluation We performed objective evaluation of the melody- similarity classifier that detects positive pairs in the dataset. We present the similarity matrices between selected exam- ples and report the classification metrics on the test set. A selection of similarity matrices is depicted in Figure 3. Table 1 shows the classification results on the test set. Table 1 . Classification metrics on test split. Precision Recall F1 Different 1.00 0.94 0.97 Similar 0.94 1.00 0.97 Average 0.97 0.97 0.97 Accuracy 0.97 The similarity matrix reveals that the model effectively captures melodic similarity, accurately reflecting the prox- imity between music audio segments. However, we no- tice that the positive pairs tend to have large-scale acti- vations like Figure 3 shows. This shows that the model may not only be sensitive in melody, but also the music texture (if the model is only sensitive in melody, then the positive pair similarity matrix should present the repeating pattern). In addition, some of the negative pair similarity Track01880 -version 3Track01976 -version 1Track01889 -originalTrack01889 -version2 differentsimilar Figure 3 . Examples of similarity matrices, a positive pair (top) and a negative pair (bottom) from the test split are demonstrated. matrices shows horizontal or vertical activations, meaning that "one or several adjacent segments in piece 1 may be similar to all windows in piece 2", which is not likely in real case. This reflects to some inherent problem on the similarity model as a black box. Observed from the classi- fication metrics, we would say that our model fits well on MelodySim, reaching 97% accuracy as well as F1 score, which indicates that the detection on positive pairs and negative pairs is balanced. 5.3 Subjective evaluation of dataset To assess the efficiency of our MelodySim dataset, we con- ducted a listening study. A total of 12 participants listened to 12 audio pairs and rated the overall similarity, melodic similarity, and similarity of non-melodic aspects on a 7- point Likert scale [23]. The results, depicted in Table 2, confirm that the proposed augmentations mainly alter non- melodic aspects of the music.Table 2 . MelodySim dataset listening study results; as- pects are rated on a 7-point Likert Scale; reported Mean Opinion Score with 95% Confidence interval. Aspect Positive pairs Negative pairs Overall similarity 4.23±0.80 2 .00±0.68 Melodic similarity 4.53±0.84 1 .90±0.90 Non-melodic similarity 3.94±0.53 2 .27±0.22 6. DISCUSSION AND LIMITATIONS The task of targeted augmentation to preserve melody but alter other attributes is not simple due to a few factors. First, identifying melody is not always straightforward, as some files include multiple melody tracks, or have melody being played in some parts of the song by an otherwise non-melodic track. This | https://arxiv.org/abs/2505.20979v1 |
makes it difficult to craft a sim- ple rule for melody identification, which could sometimes result, for instance, in a part of the melody missing, or a non-melody track being treated as a melody track, thus being always present after passing through the augmenta- tion pipeline. Furthermore, melody identification rules can be genre-dependent. In this paper, we offer a good base- line melody identification model, which can be further im- proved in future work. When constructing positive and negative pairs, we did not consider the possibility of pairing two segments from the same song at different time marks. The probability of a repeating motive in the same song is too high and would require a similarity metric to automatically identify such similar segments. However, using segments of the same song as either positive pairs (with matching melody, but slightly varied background, for instance, when the song culminates vs when it starts), or as negative pairs (when the melody played is different, e.g., verse vs chorus), would benefit the training of the similarity model further. Future work will focus on further augmentation im- provement, and include more analyses, potentially with real-life plagiarism cases. 7. CONCLUSION We present the MelodySim dataset, an open source audio dataset and model for melody-aware music similarity and plagiarism detection. MelodySim was constructed through a set of targeted midi and audio augmentations such that it contains original tracks as well as three variations that have a comparable melody, but vary in terms of other musical aspects (arpeggiated chords, changed instruments, missing tracks, etc.). The similarity in terms of melody and other musical aspects was verified through a listening study. We also propose a melody-aware similarity model. This model consists of a MERT-encoder, combined with a ResNet backbone and classification head. We employ a triplet neural network architecture for training the model on the MelodySim dataset. In an objective evaluation, we show that the model performs admirably in detecting vari- ations of songs in the test set. 8. ACKNOWLEDGMENT This work has received support from SUTD’s Kickstart Initiative under grant number SKI 2021 04 06 and MOE under grant number MOE-T2EP20124-0014. 9. REFERENCES [1] J. Melechovsky, Z. Guo, D. Ghosal, N. Majumder, D. Herremans, and S. Poria, “Mustango: Toward con- trollable text-to-music generation,” in Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies (Volume 1: Long Papers) , 2024, pp. 8286–8309. [2] J. Copet, F. Kreuk, I. Gat, T. Remez, D. Kant, G. Syn- naeve, Y . Adi, and A. Défossez, “Simple and control- lable music generation,” Advances in Neural Informa- tion Processing Systems , vol. 36, pp. 47 704–47 720, 2023. [3] M. Wei, M. Modrzejewski, A. Sivaraman, and D. Her- remans, “Prevailing research areas for music ai in the era of foundation models,” arXiv preprint arXiv:2409.09378 , 2024. [4] G. Somepalli, V . Singla, M. Goldblum, J. Geiping, and T. Goldstein, “Diffusion art or digital forgery? inves- tigating data replication in diffusion models,” in Proc. of the IEEE/CVF Conf. on computer vision and pattern recognition , 2023, | https://arxiv.org/abs/2505.20979v1 |
pp. 6048–6058. [5] N. Carlini, J. Hayes, M. Nasr, M. Jagielski, V . Sehwag, F. Tramer, B. Balle, D. Ippolito, and E. Wallace, “Ex- tracting training data from diffusion models,” in 32nd USENIX Security Symposium , 2023, pp. 5253–5270. [6] B. L. Sturm, M. Iglesias, O. Ben-Tal, M. Miron, and E. Gómez, “Artificial intelligence and music: open questions of copyright law and engineering praxis,” in Arts, vol. 8, no. 3. MDPI, 2019, p. 115. [7] J. Huber, D. Müllensiefen, and R. Kopiez, “V on der" armseligen allerweltsfloskel" zur" schöpferischen eige- nart": Eine analyse deutscher gerichtsentscheidungen zu plagiaten in der musik von 1966 bis 2020,” Ph.D. dissertation, Hochschule für Musik, Theater und Me- dien Hannover, 2020. [8] F. Karsdorp, P. van Kranenburg, and E. Manjavacas, “Learning similarity metrics for melody retrieval,” in Proc. of the 20th Int. Society for Music Information Re- trieval Conf. , 2019, pp. 478–485. [9] K. Park, S. Baek, J. Jeon, and Y .-S. Jeong, “Music pla- giarism detection based on siamese cnn,” Hum.-Cent. Comput. Inf. Sci , vol. 12, pp. 12–38, 2022. [10] S. Park, H. Kim, J. Pak, and J. Kim, “Quantitative anal- ysis of melodic similarity in music copyright infringe- ment cases,” in International Society for Music Infor- mation Retrieval Conference . International Society for Music Information Retrieval, 2024.[11] Y . Li, R. Yuan, G. Zhang, Y . Ma, X. Chen, H. Yin, C. Xiao, C. Lin, A. Ragni, E. Benetos et al. , “Mert: Acoustic music understanding model with large-scale self-supervised training,” arXiv:2306.00107 , 2023. [12] G. Kasif and G. Thondilege, “Exploring music simi- larity through siamese cnns using triplet loss on music samples,” in 2023 Int. Research Conf. on Smart Com- puting and Systems Engineering (SCSE) , vol. 6. IEEE, 2023, pp. 1–8. [13] P. Knees and M. Schedl, “A survey of music similarity and recommendation from music context data,” ACM Transactions on Multimedia Computing, Communica- tions, and Applications (TOMM) , vol. 10, no. 1, pp. 1–21, 2013. [14] P. Van Kranenburg, M. de Bruin, L. P. Grijp, and F. Wiering, “The meertens tune collections,” Meertens Online Reports , vol. 2014, no. 1, 2014. [15] Z. Yin, F. Reuben, S. Stepney, and T. Collins, ““a good algorithm does not steal–it imitates”: The originality report as a means of measuring when a music gener- ation algorithm copies too much,” in Artificial Intel- ligence in Music, Sound, Art and Design: 10th Int. Conf., EvoMUSART 2021, Part of EvoStar . Springer, 2021, pp. 360–375. [16] T. Bertin-Mahieux, D. P. Ellis, B. Whitman, and P. Lamere, “The million song dataset.” in Ismir , vol. 2, no. 9, 2011, p. 10. [17] N. Borkar, S. Patre, R. S. Khalsa, R. Kawale, and P. Chakurkar, “Music plagiarism detection using audio fingerprinting and segment matching,” in 2021 Smart Technologies, Communication and Robotics (STCR) . IEEE, 2021, pp. 1–4. [18] A. López-García, B. Martínez-Rodríguez, and V . Liern, “A proposal to compare the similarity be- tween musical products. one more step for automated plagiarism detection?” in Int. Conf. on Mathematics and Computation in Music . Springer, 2022, pp. 192–204. [19] R. Batlle-Roca, | https://arxiv.org/abs/2505.20979v1 |
W.-H. Liao, X. Serra, Y . Mitsufuji, and E. Gómez Gutiérrez, “Towards assessing data replica- tion in music generation with music similarity metrics on raw audio,” 2024. [20] E. Manilow, G. Wichern, P. Seetharaman, and J. Le Roux, “Cutting music source separation some slakh: A dataset to study the impact of training data quality and quantity,” in 2019 IEEE Workshop on Ap- plications of Signal Processing to Audio and Acoustics (WASPAA) . IEEE, 2019, pp. 45–49. [21] Z. Jiang and R. B. Dannenberg, “Melody track identifi- cation in music symbolic files.” in 16th Sound and Mu- sic Computing Conf. (SMC), Málaga, Spain , 05 2019. [22] E. Hoffer and N. Ailon, “Deep metric learning us- ing triplet network,” in Similarity-based pattern recog- nition: third international workshop, SIMBAD 2015, Copenhagen, Denmark, October 12-14, 2015. Pro- ceedings 3 . Springer, 2015, pp. 84–92. [23] A. Joshi, S. Kale, S. Chandel, and D. K. Pal, “Likert scale: Explored and explained,” British journal of ap- plied science & technology , vol. 7, no. 4, p. 396, 2015. | https://arxiv.org/abs/2505.20979v1 |
arXiv:2505.20993v1 [cs.CL] 27 May 2025Who Reasons in the Large Language Models? Jie Shao Jianxin Wu∗ National Key Laboratory for Novel Software Technology, Nanjing University, China School of Artificial Intelligence, Nanjing University, China [email protected], [email protected] Abstract Despite the impressive performance of large language models (LLMs), the pro- cess of endowing them with new capabilities—such as mathematical reasoning— remains largely empirical and opaque. A critical open question is whether reasoning abilities stem from the entire model, specific modules, or are merely artifacts of overfitting. In this work, we hypothesize that the reasoning capabilities in well- trained LLMs are primarily attributed to the output projection module ( o_proj ) in the Transformer’s multi-head self-attention (MHSA) module. To support this hypothesis, we introduce Stethoscope for Networks (SfN), a suite of diagnostic tools designed to probe and analyze the internal behaviors of LLMs. Using SfN, we provide both circumstantial and empirical evidence suggesting that o_proj plays a central role in enabling reasoning, whereas other modules contribute more to fluent dialogue. These findings offer a new perspective on LLM interpretability and open avenues for more targeted training strategies, potentially enabling more efficient and specialized LLMs. 1 Introduction Although large language models (LLMs) [ 29,6,41,5] have exhibited great success and potential in various aspects, developing new capabilities for LLMs [ 53,17,37,14] is still a trial and error experimentation process in most cases. For example, one of the most exciting milestones is LLMs that can reason [ 18,13,39], e.g., solving complicated mathematical problems using a reasoning sequence that is agreeable by human experts. This success, however, is still in the black-box style. Currently, there are two primary approaches to inspiring reasoning capabilities in LLMs. For the most advanced models [ 13,51], reinforcement learning method (for example, PPO [ 36], DPO [ 30], or GRPO [ 37]) is commonly adopted to enhance the model’s ability to solve complex mathematical or programming problems in a step-by-step manner [ 48]. A more efficient alternative involves supervised fine-tuning (SFT): by providing the backbone LLM with well-prepared, diverse, and step-by-step reasoning traces—often generated through handcrafted examples or existing reasoning models [ 54,25,13,51]—the model surprisingly acquires reasoning abilities after training. However, despite the practical success of this method, the underlying mechanism remains largely unexplained. It is still unclear why or how this ability emerges. Several potential explanations may account for this phenomenon: Case 1 Is it the LLM in its entirety (i.e., the union of all its weights) that leads to this capability, such that this miracle is not explainable? Case 2 Or, is there certain module(s) in it that should be praised for this success, such that we can advance our understanding of LLMs? ∗Corresponding author. Preprint. Under review. Chat: {q, k, v}_proj & MLP Reason: o_projStethoscope for Networks 🩺 1. The Delta Stethoscope2. The Merge Stethoscope3. The Freeze Stethoscope Base Model Reasoning Model 💯minus ❓ ❓ Base Model 🧊 🔥SFT Reasoning Model Reasoning Model 💯 Base Model Merged Model merge4. The Destruction Stethoscope chat chat LLMChatReasonConjecture: Figure 1: Stethoscope for Networks. 
SfN is a framework designed to identify which components of an LLM give | https://arxiv.org/abs/2505.20993v1 |
rise to specific abilities. By comparing weight changes and observing behaviors under controlled module merging, tuning, or destruction, SfN provides interpretable insights into the origin of capabilities like reasoning. Case 3 Or in the worst scenario, is reasoning an illusion (e.g., by overfitting to certain types of data), such that we have overestimated the potentials of LLMs? A definitive answer to any of the above questions will be extremely valuable to guiding the future direction of LLM research. Even a hypothesis or conjecture supported by circumstantial evidences will be highly enlightening, too, let alone when convincing empirical evidences are available. To this end, our hypothesis is that Case 2 holds in LLMs that reason well. To be more precise, we hypothesize that it is the output projection’s parameters ( o_proj ) in the Transformer [ 43]’s multi-head self-attention (MHSA) module that is in charge of reasoning in an LLM. To support our hypothesis, we propose a few techniques for diagnosing LLM’s behaviors, in particular, the potential functionalities and impacts of various modules in it. We call these techniques Stethoscope for Networks, or SfN (summarized and illustrated in Figure 1). Starting from reasoning-enhanced models, we argue that the weight differences between a base LLM and its fine-tuned counterpart (e.g., for reasoning tasks) provide firsthand and crucial evidence for understanding internal changes. We refer to this approach as the Delta Stethoscope. In addition, we introduce two novel and previously unexplored methods within the SfN framework: the Merge Stethoscope and the Destruction Stethoscope. The Merge Stethoscope replaces specific modules in a base model with those from a reasoning model. Surprisingly, the resulting variant can maintain fluent dialogue and demonstrate improved reasoning ability in some cases. This phenomenon offers strong clues about the origin and localization of reasoning capability in LLMs. The Destruction Stethoscope, in contrast, systematically disables individual modules and observes the resulting behavior to infer the functional roles of each component. We also propose the Freeze Stethoscope, which selectively freezes parts of the model during fine-tuning. By controlling which modules are updated, we provide convincing empirical support for earlier insights and clues into the localization of reasoning within LLMs. With different gadgets we propose in SfN, we provide not only sanity check level tests for our hypothesis, but also more convincing circumstantial supports and even direct empirical evidences. In short, the contributions in this paper are two-fold: •With various diagnosis evidence (SfN), we are confident in hypothesizing that the output projection o_proj is mainly responsible for the reasoning in LLMs. The impact of this finding include not only potential ways to improve LLM that reasons (e.g., training much faster), but may generalize to produce better LLMs for other tasks (e.g., for a vertical LLM designed specifically for a domain). Our further conjecture is that other modules combined together lead to lucid conversations, but o_proj is less important in conversational ability. •The proposed Stethoscope for Networks (SfN) gadgets are a set of tools that are useful in understanding modern LLMs and even other networks, which have the potential to enhance our understanding of LLM or | https://arxiv.org/abs/2505.20993v1 |
deep neural network and may lead to alternative routes for further deep learning research. 2 2 Key Hypothesis: Output Projection is the Key for Reasoning To present our findings, we start by introducing necessary background information and notations, while discussions on related work are deferred to Section 5. Modern LLMs [ 41,5,29] mostly consist of many Transformer blocks. A Transformer [ 43] block is composed of a multi-head self-attention (MHSA) module and a multi-layer perceptron (MLP) module. Components in MHSA include various projections, such as those for computing Q, K and V , denoted as q_proj ,k_proj , and v_proj , respectively. The output projection ( o_proj ) produces MHSA’s output. Components in the MLP are mainly linear projections: up, down, and gate [ 16,41,5] projections, denoted as up_proj ,down_proj , and gate_proj , respectively. The computation process is defined as: xattn=wo Softmax(wqx)(wkx)⊤ √ d (wvx) xmlp=wdown σ(wgatex)⊙(wupx)(1) For simplicity, we omit residual connections and present the computation at the token level, without using matrix or vectorized notation. Other essential components not explicitly included in equa- tion 1 include rotary positional embeddings (RoPE)[ 38], input embeddings ( embed_tokens ), layer normalization[4] ( layernorm ), and the language modeling head ( lm_head ). LetAbe an LLM with weak or no reasoning ability. By carefully procuring a dataset of reasoning examples [ 13,25,51], one can cleanse and improve the quality of the dataset into the training data D, and then finetune the existing model Aby using techniques such as SFT. The resulting LLM, model B, exhibits strong reasoning capabilities. For example, in commonly adopted practices, the base LLM Ais typically a widely used open-source model such as Qwen2.5-Math-1.5B, 7B or Qwen2.5-14B, 32B [ 52]. The reasoning model Bdenotes a publicly available reasoning-enhanced variant, such as DeepSeek-R1-Distill-Qwen-1.5B, 7B, 14B, 32B [ 13], which comes with a clearly specified base model and well-documented training procedure. Models that are either not open-sourced [ 13,39], or open-sourced without sufficient training details [ 40] or access to the base model [ 51], are not discussed in this paper. 2.1 The Delta Stethoscope In the above scenario, it is obvious that AandBshare exactly the same network architecture and structure, with their sole difference being the weights (parameters) inside various components. Suppose w(A)(w(B)) denotes the set of weights for all modules in A(B). Then, it is natural to conclude that to understand the difference between AandB(i.e., reasoning or not), we should focus on the difference between w(A)andw(B). Hence, we propose our first Stethoscope for Network. Assumption 1 (The Delta Stethoscope) Suppose AandBare two LLMs with weak and strong reasoning ability, respectively, and Bis obtained by finetuning from A. Then w(B)−w(A)contains essential information if we want to pinpoint the source of the reasoning ability in B. For each component X(e.g. X=q_proj ), we compute the ℓ2norm of the weight difference, ∥wX(B)−wX(A)∥ℓ2, and visualize the results across all the blocks in Figure 2. For simplicity and due to space constraints, we present three representative comparisons: Ais Qwen2.5-Math-1.5B [ 53] or Qwen2.5-14B, 32B [ 52] and Bis DeepSeek-R1-Distill-Qwen-1.5B, 14B, 32B [ 13]. Additional results | https://arxiv.org/abs/2505.20993v1 |
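A minimal sketch of the Delta Stethoscope computation described above (per-module L2 norm of weight differences between a base model A and its reasoning-tuned counterpart B) might look as follows; the function names are ours and the Hugging Face repository ids in the comment are assumptions, not the authors' code.

```python
from collections import defaultdict
import torch
from transformers import AutoModelForCausalLM

def module_l2_deltas(base_name, reasoning_name,
                     modules=("q_proj", "k_proj", "v_proj", "o_proj",
                              "up_proj", "gate_proj", "down_proj")):
    """Per-module, per-layer L2 norm of weight differences ||w_X(B) - w_X(A)||."""
    a = AutoModelForCausalLM.from_pretrained(base_name).state_dict()
    b = AutoModelForCausalLM.from_pretrained(reasoning_name).state_dict()
    deltas = defaultdict(list)
    for name, w_a in a.items():
        for m in modules:
            if f"{m}.weight" in name and name in b:
                # One value per transformer block, in layer order.
                deltas[m].append(torch.norm(b[name].float() - w_a.float()).item())
    return deltas  # e.g. deltas["o_proj"][layer_idx]

# Example pairing used in the paper (repository ids assumed):
# d = module_l2_deltas("Qwen/Qwen2.5-Math-1.5B",
#                      "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
```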
for other model sizes (7B and 8B) are provided in the appendix and exhibit similar patterns. For the 1.5B models, the signal is less clear, but o_proj still exhibits a distinct pattern compared to q,k,v_proj —showing the largest change within the attention module and the second-largest across the entire model. As model size increases to 14B and 32B, this trend becomes more pronounced. In both cases, the most notable observation is that when X=o_proj , theℓ2norm is at least two times larger than any other component, indicating the substantial changes in this module during reasoning enhancement. In Figure 3, we further analyze the distribution of relative weight changeswX(B)−wX(A) wX(A)for each linear module. To improve clarity and visual appeal, we plot the distribution every 5 layers and clip values in the range [−1.0,1.0]to mitigate the influence of outliers. The vertical axis represents the 3 Qwen2.5-14B vs. DeepSeek-R1-Distill-Qwen-14BQwen2.5-32B vs. DeepSeek-R1-Distill-Qwen-32BQwen2.5-Math-1.5B vs. DeepSeek-R1-Distill-Qwen-1.5B q_proj k_proj v_proj o_proj up_proj gate_proj down_proj q_proj k_proj v_proj o_proj up_proj gate_proj down_proj q_proj k_proj v_proj o_proj up_proj gate_proj down_proj(Layer)Figure 2: Per-module L2 distance of linear weights between models AandB.Notably, the o_proj module shows the second-largest change in 1.5B models, and the largest in 14B and 32B models, highlighting its potential importance for reasoning. Similar trends are observed in 7B and 8B models (see appendix). Qwen2.5-32B vs. DeepSeek-R1-Distill-Qwen-32BQwen2.5-14B vs. DeepSeek-R1-Distill-Qwen-14B MLP.down_proj MLP.gate_proj MLP.up_proj self_attn.k_proj self_attn.o_proj self_attn.q_proj self_attn.v_projQwen-2.5-Math-1.5B vs. DeepSeek-R1-Distill-Qwen-1.5B Figure 3: Layer-wise distribution of relative weight changes between models AandB.While most modules display a unimodal distribution, the o_proj module uniquely exhibits a bimodal distribution, highlighting its distinctive behavior. Consistent patterns are observed across models of other sizes, with detailed results provided in the appendix. frequency. A striking and consistent finding is that all linear modules—except o_proj —exhibit a unimodal distribution centered around zero, whereas o_proj uniquely displays a clear bimodal pattern , highlighting its distinct role. Both observations hold consistently across model sizes and base models: o_proj exhibits the largest or second-largest weight shift, and the overall weight difference patterns remain strikingly similar. Therefore, it is reasonable to guess that the output projection o_proj plays a pivotal role in curating B’s reasoning ability. We are, however, not aware of o_proj ’s specific role: is it solely responsible for reasoning? Or, is it collaborating with another module(s)? Or, in the worst scenario, is this difference in ∥wX(B)−wX(A)∥ℓ2andwX(B)−wX(A) wX(A)coincidental? 4 Level I: NonsenseOutput lacks coherence or meaning.Q: Can you write a short paragraph about the importance of reading books? A1: [[[[[[ … / A2: the importance of course, the importance of course, of course, the importance of course, of course …Level II: IrrelevantGrammatically correct, but contextually off-topic.Q: Can you write a short paragraph about the importance of reading books? A: Can you provide me with some data?Level III: Context-AwareUnderstands the question, but may apply flawed reasoning.Q: Every morning, Aya … This morning, if she walks at s+1/2 kilometers per hour, how many minutes will the walk take? 
A: First, let's set up the equations … Simplify and solve … Final answer: 12. Level IV: Correct and Logical Reasoning. Fully understands the context and provides | https://arxiv.org/abs/2505.20993v1 |
a valid solution.Q: Every morning, Aya … This morning, if she walks at s+1/2 kilometers per hour, how many minutes will the walk take? A: First, the problem says that … Subtract: Then: Today: speed is 3 km/h, walk = 180 min, total = 180 + 24 = 204 minutes. 9(1s−1s+2)=1.6→s(s+2)=11.25→s=2.53.6+t60=4→t=24 ❌ ⚠ 🤔 ✅Q: Can you write a short paragraph about the importance of reading books? A: Absolutely! Reading is a crucial skill that can help students improve … and understand the implications of these skills. Figure 4: Four levels of responses generated by the LLM . From level I to level IV , the model exhibits stronger language organization and logical reasoning skills. Each example includes a question (e.g., a math problem from AIME or a typical user-issued request) and the corresponding response generated by the LLM. 2.2 The Merge Stethoscope We design another gadget, the Merge Stethoscope, to answer this question. Suppose an LLM Mis formed by merging models AandB, that is, Mhas the same structure as AandB, while a subset of its modules’ parameters come from Aand the rest from B. In a conversational or reasoning task, what will the output of Mlook like? We can imagine 4 levels of different output, as Level I A sequence of random or nonsense tokens. Level II A sequence that looks like normal sentences, but does not fit into the context of the task. Level III A sequence that is meaningful sentences that match the task’s context well but will fail to reason in difficult problems. Level IV A sequence that reasons—and reasons correctly in most cases. Figure 4 shows examples of level I to IV outputs. It is worth highlighting that Misrudely merged fromAandBwithout any further tuning . Hence, the intuitive conjecture will be that Mwill produce level I output (i.e., ushering meaningless tokens). However, if model M, when merged in a specific configuration, is capable of producing level IV outputs for questions that model Afails to solve, then the specially merged components are likely critical for reasoning. Assumption 2 (The Merge Stethoscope) Suppose Mis created by merging the output projection (o_proj ) weights of B(which has strong reasoning ability) and all other components of A(which is weak in reasoning), and further suppose that Mhas stronger reasoning ability compared to A. Then, we assume o_proj is crucial in achieving reasoning in LLMs. We attempt a minimal or atomic merge by replacing only the o_proj modules in model A= Qwen2.5-Math-1.5B [53] with that of model B=DeepSeek-R1-Distill-Qwen-1.5B [13], keeping all other components unchanged. Although we initially expected the resulting model to produce level I or level II outputs, the results turn out to be surprising. On the AIME 2024 benchmark [ 19], the merged model M1achieves level IV performance on several questions that model Acannot solve. As shown in Table 1, the merged model not only yields correct reasoning and answers, but also tends to generate longer and more detailed responses compared to A. In contrast, replacing other modules such as {q,k,v}_proj andmlpleads to performance degradation. For example, model M2, which replaces | https://arxiv.org/abs/2505.20993v1 |
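A minimal sketch of such an atomic merge is given below, assuming both checkpoints share the same architecture and are loadable with Hugging Face Transformers; this is an illustration of the Merge Stethoscope, not the authors' merging code, and all names are ours.

```python
import torch
from transformers import AutoModelForCausalLM

def merge_module(base_name, donor_name, keyword="o_proj"):
    """Copy every parameter whose name contains `keyword` (e.g. 'o_proj') from
    the reasoning model B into the base model A. No further fine-tuning is done."""
    model_a = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.bfloat16)
    model_b = AutoModelForCausalLM.from_pretrained(donor_name, torch_dtype=torch.bfloat16)
    donor = dict(model_b.named_parameters())
    with torch.no_grad():
        for name, param in model_a.named_parameters():
            if keyword in name and name in donor:
                param.copy_(donor[name])   # suture B's module into A
    return model_a  # the merged model M
```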
{q,k,v}_proj , produces level III outputs, while M3, which replaces mlp, deteriorates to level I. Only replacing o_proj results in a correct reasoning process and a correct answer, as illustrated in Figure 5. This striking difference motivates our further investigation in Section 3. 5 ModelReplaced ModuleAIME 2024Average Tokens A(Q-1.5B) - 0.067 2421 M1 o_proj 0.200 5418 M2 {q,k,v}_proj 0.000 2058 M3 mlp 0.000 15532 B(D-1.5B) - 0.233 11892 Table 1: AIME 2024 accuracy of the base model, the reasoning model, and their merged variants. Each merged model is constructed by replacing specific modules in model Awith the corresponding module from model B. Q: Every morning, Aya does a 9 kilometer walk … if she walks at s+1/2 kilometers per hour, how many minutes will the walk take?: To solve this problem, we need to determine … So, the walk will take 204 minutes, including the 24 minutes at the coffee shop. The final answer is 204.M1: To solve this problem … output 12.0000000000000. The output indicates that the time taken for the walk is 12 minutes. So, the final answer is 12.M2: … walking speeds increase speeds faster walking speeds increase walking speeds faster walking speeds faster walking …M3Figure 5: Examples of outputs generated by merged models. OnlyM1produces both a valid reasoning process and the correct answer. These results clearly show that the merged model Mhas a stronger reasoning capacity than A, despite thatMis sutured from two completely different models and has never being finetuned. Now we feel confident in our assumption that o_proj is the key component responsible for reasoning in LLMs. 2.3 The Freeze Stethoscope As models AandBscale up (e.g., to 7B parameters), merging components such as q,k,v_proj or mlpstill results in significant performance degradation. However, unfortunately, merging o_proj no longer brings notable improvements in solving complex mathematical problems—although it does not harm accuracy, and still increases the generated output length. Our analysis of ||wX(B)−wX(A)||ℓ2suggests that this is due to a substantial mismatch in normal- ization parameters (that is, layernorm modules) between AandBat larger scales, compared to smaller models (e.g. 1.5B). Even when we merge both o_proj andlayernorm parameters from B, the resulting model Mstill fails to reason effectively, probably because the remaining parameters of Aare incompatible with the normalization parameters of B. To investigate this hypothesis in larger LLMs, we introduce the Freeze Stethoscope. Assumption 3 (The Freeze Stethoscope) Suppose that an LLM Fis obtained by supervised fine- tuning using the dataset D.Fis initialized from A, and both o_proj and normalization components are tuned while other components are frozen. If Fexhibits strong reasoning ability, then we assume thato_proj is crucial in achieving reasoning in LLMs even in large-scale models. It is worth noting that embed_tokens andlm_head are also tuned.2Normalization module pa- rameters are unfrozen by default. We adopt the pipeline of s1 [ 25] as our baseline, which uses the base model A=Qwen2.5-32B-Instruct and the dataset D=s1K containing 1,000 high-quality reasoning traces. The results are shown in Table 2, where our model F4corresponds to model Bin Assumption 3. We do notstrictly follow the training or testing setup of s1, | https://arxiv.org/abs/2505.20993v1 |
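A minimal sketch of the Freeze Stethoscope setup for model F2, to be applied before running the s1-style SFT pipeline; the keyword-matching scheme is an assumption (it relies on Qwen-style parameter names such as `input_layernorm` and `lm_head`), not the authors' training code.

```python
def freeze_for_reasoning_sft(model, trainable_keywords=("o_proj", "norm",
                                                        "embed_tokens", "lm_head")):
    """Unfreeze only o_proj, normalization layers, token embeddings and the LM head;
    everything else stays frozen during supervised fine-tuning."""
    n_trainable = 0
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in trainable_keywords)
        if param.requires_grad:
            n_trainable += param.numel()
    print(f"trainable parameters: {n_trainable / 1e9:.1f}B")
    return model
```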
primarily due to limited computational resources and the lack of an exact testing recipe to reproduce the reported results. However, our objective is not to optimize accuracy via testing tricks or prompt tuning, but to highlight the effectiveness of o_proj tuning compared to full-parameter tuning. For fair comparison, we adopt the “Budget Forcing Wait 2x” setting from s1 and retain all configurations without hyperparameter tuning. Using this simplest possible experimental setup, Table 2 clearly shows that simply tuning o_proj andlayernorm (model F2)) leads to strong reasoning ability, while at the same time only tuning layernorm (model F1) harms the reasoning of the LLM. Further unfreezing the parameters of {q,k,v}_proj (model F3) yields little additional gain or even negative impact. The training loss curves are shown in Figure 6. When all parameters including MLP are unfrozen, the model exhibits clear signs of overfitting, likely using the large MLP capacity to memorize the training set. In contrast, tuning only o_proj yields a smoother and more stable curve. Combined 2Without tuning these components, finetuning failed to converge. 6 0.10.20.30.4 050100150200250300Loss Steps0.10.20.30.4 050100150200250300Loss Steps32B14BF1F2F3F4Figure 6: Training loss curves for fine-tuning Qwen2.5-14B,32B-Instruct on reasoning tasks. Different models unfreeze different sets of parameters, as detailed in Table 2. Model Fintuned Modules #Param (B) Steps/s AIME 2024 Math 500 GPQA Diamond A(Q-32B) - - - 0.167 0.836 0.485 F1 Emb + Head 1.5 0.055 0.200 0.756 0.444 F2 Emb + Head + o_proj 3.2 0.052 0.367 0.890 0.520 F3 Emb + Head + {q,k,v,o}_proj 5.6 0.044 0.300 0.886 0.525 F4(B) All 32.8 0.015 0.367 0.906 0.591 A(Q-14B) - - - 0.133 0.810 0.449 F1 Emb + Head 1.5 0.106 0.133 0.722 0.414 F2 Emb + Head + o_proj 2.8 0.099 0.266 0.848 0.485 F3 Emb + Head + {q,k,v,o}_proj 3.7 0.081 0.233 0.854 0.490 F4(B) All 14.7 0.053 0.266 0.872 0.530 Table 2: Reasoning performance of different fine-tuning strategies on Qwen2.5-{14B, 32B}- Instruct. Emb denotes embed_tokens , Head denotes lm_head , and Attn denotes the entire MHSA. #Param refers to the number of trainable parameters, Steps/s indicates training speed, and the last three columns report commonly used metrics for evaluating reasoning models. with its competitive performance, this suggests that the model learns to reason rather than simply memorize. Hence, we are now prepared and feel supported to propose our key hypothesis: Hypothesis 1 (Outstanding Output Projection) In an LLM that reasons well, we hypothesize that the output projection ( o_proj ) component is the single or at least the most important module that dominates its reasoning ability. With carefully chosen tuning strategy and hyperparameters, there is reason to believe that tuning only o_proj (+LN) can reach the level of model Bin terms of reasoning performance. And, beyond exhibiting reasoning abilities, Table 2 also shows that tuning only o_proj (+LN) has other significant advantages: e.g., significantly faster finetuning (3 times faster) and smaller GPU memory consumption. These advantages will become more established when larger LLMs are tuned. 3 Conjecture: Conversation Hinges on Other Modules but Not Output We are mainly concerned with two abilities of LLMs: conversation and reasoning, which | https://arxiv.org/abs/2505.20993v1 |
3 Conjecture: Conversation Hinges on Other Modules but Not Output

We are mainly concerned with two abilities of LLMs: conversation and reasoning, which map to levels III and IV in our categorization of LLM outputs, respectively. Our Hypothesis 1 concerns reasoning, but is there one module, or are there several modules, accounting for lucid conversations? In this section, we propose a further stethoscope to diagnose this question and raise our conjectures accordingly.

3.1 The Destruction Stethoscope

Our previous stethoscopes follow a "constructive proof" style, while now we resort to a "proof by contradiction" style. If one module in an LLM is "destructed" and the LLM can still produce level III conversation outputs, then we have good reason to guess that this module is not important for conversational ability; conversely, it is important if the LLM ceases to converse normally.

Assumption 4 (The Destruction Stethoscope) Suppose a module X is destructed (i.e., its normal functionality is disabled by some destruction method) in an LLM A. We denote the resulting LLM as D. Then, the fact that D continues (or ceases) to produce level III output (meaningful sentences in the conversation's context) indicates whether X is important for conversational abilities or not.

We propose 3 destructors to destroy a module:
Zero: set all parameters within X to 0.
ReInit: re-initialize all parameters inside X using Gaussian random numbers (mean=0, std=0.02).
Remove: remove the entire layer.

The Zero destructor is often equivalent to setting the output activation of X to zeros (e.g., in a linear module like o_proj). We want to emphasize that ReInit incurs more serious damage to an LLM than Zero does. Zero may change activations to zero, but ReInit exerts random effects (i.e., noise) on LLM activations. More importantly, these random effects act as input to the next Transformer block, so the noise is quickly amplified. Hence, level I or II output is expected when X is destructed (especially when reinitialized) in a large number of Transformer blocks.

3.2 Conjectures Concerning the Conversation Capability

For the Qwen2.5-32B model with 64 layers, we observe that destroying modules in early or late layers, where input and output representations are more sensitive, consistently yields level I outputs. To avoid this, we restrict destruction to blocks 5-30. This range is empirically chosen, as affecting more layers often causes all outputs to degrade to level I, making distinctions between modules impossible. The experimental results are presented in Table 3. Specifically, we destroy selected modules and analyze the corresponding output. The Remove destructor removes the transformer layers as a whole. Note that the results are not statistics computed over many different experiments: they only reflect the conversation illustrated in Figure 4, but we observed similar patterns for other conversations.

Destruction Method | Module | Output Level
Zero | q_proj | I
Zero | k_proj | I
Zero | v_proj | III
Zero | o_proj | III
Zero | up_proj | I
Zero | gate_proj | I
Zero | down_proj | I
ReInit | q_proj | I
ReInit | k_proj | I
ReInit | v_proj | II
ReInit | o_proj | III
ReInit | up_proj | I
ReInit | gate_proj | I
ReInit | down_proj | I
Remove | - | I

Table 3: Output levels of different modules under the three destruction methods: Zero, ReInit, and Remove. All experiments are based on Qwen2.5-32B with destruction applied to specific layers.

Table 3 reveals distinct roles of modules in conversation. Notably, o_proj, crucial for reasoning, appears unimportant for conversation. In contrast, all MLP components (up_proj, down_proj, gate_proj) are essential. | https://arxiv.org/abs/2505.20993v1
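The three destructors are in-place edits of the selected blocks. The sketch below is an illustrative reading of the procedure rather than the authors' code; it assumes LLaMA/Qwen-style module naming (model.model.layers[i].self_attn.o_proj, ...mlp.up_proj, and so on) and restricts the damage to blocks 5-30 as described above.

    import torch
    from torch import nn

    @torch.no_grad()
    def destruct(model, target="o_proj", method="Zero", blocks=range(5, 31)):
        # Apply the Zero or ReInit destructor to every linear module named
        # `target` inside the selected transformer blocks.
        for name, module in model.named_modules():
            if not (isinstance(module, nn.Linear) and name.endswith(target)):
                continue
            try:
                layer_idx = int(name.split(".layers.")[1].split(".")[0])
            except (IndexError, ValueError):
                continue
            if layer_idx not in blocks:
                continue
            if method == "Zero":
                module.weight.zero_()              # the module's output activation becomes zero
            elif method == "ReInit":
                module.weight.normal_(0.0, 0.02)   # injects noise that later blocks amplify

    # The Remove destructor instead drops the whole transformer blocks:
    # model.model.layers = nn.ModuleList(
    #     [blk for i, blk in enumerate(model.model.layers) if i not in blocks])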
Within MHSA, q_proj and k_proj are important, while v_proj plays a minor role. Based on these (admittedly weaker) observations, we propose the following conjecture.

Conjecture 1 (Division of Labor) Based on current observations, an LLM can be roughly divided into two sets of modules: the output projection (o_proj) and all others, where o_proj is mainly responsible for reasoning and the other modules for conversation.

The output projection then plays a unique role if this conjecture holds. Hence, we further propose another conjecture for it.

Conjecture 2 (Output Projection Plugin) With conversational capabilities provided by the other (frozen) modules, output projections may act as a plugin. For example, one set of o_proj for reasoning, and another set of o_proj for migrating an LLM to a vertical domain.

4 Potential Implications and Applications

This paper mainly diagnoses LLMs from a theoretical, highly abstract perspective. However, our hypothesis and conjectures can also have highly practical implications and applications as long as they are correct or at least partially hold.

• Fast and better reasoning LLMs. By finetuning only o_proj, we can potentially obtain a better reasoning LLM with much faster training and a much smaller GPU memory footprint.
• Integrating non-reasoning and reasoning LLMs. There is a recent trend to integrate chatting and reasoning LLMs into one model [51]. When we finetune a base LLM into a reasoning one using the previous procedure, the two only differ in o_proj, layernorm, embed_tokens and lm_head, which occupy only 10% of the model size. Hence, the two LLMs are easily loaded as one LLM with two sets of these modules for different purposes.
• Vertical LLMs. Similarly, when equipped with different output projection plugins, one may readily obtain vertical LLMs for different domains.
• Understanding deep neural networks. The proposed Stethoscopes for Networks might be useful gadgets to understand other deep models, and new stethoscopes can be further developed. They will be potentially useful in diagnosing existing networks and even in providing alternative directions for future deep learning research.

5 Related Work

Large Language Models. Modern LLMs such as GPT [29, 6], LLaMA [41, 42], Qwen [5, 52], and other representative models [7, 20] adopt an auto-regressive architecture and have demonstrated impressive capabilities across a wide range of natural language processing tasks, including question answering [32, 22], summarization [26, 27], and translation [50]. These models are typically trained on large-scale corpora using next-token prediction objectives, and their performance has been shown to scale with model size [21]. Further improvements in alignment and usability have been achieved through instruction tuning [28, 9, 46] and reinforcement learning from human feedback (RLHF) [8, 30], enabling more controllable and helpful dialogue generation.

Reasoning Models. While LLMs exhibit emergent reasoning abilities [47], recent efforts have further enhanced these capabilities through fine-tuning and architectural modifications [35, 55]. Chain-of-thought prompting [48] encourages intermediate reasoning steps, improving performance on arithmetic tasks, while self-consistency decoding [45] improves robustness by sampling multiple reasoning paths. Inspired by OpenAI's o1 [18], most advanced models now employ reinforcement learning [36, 30] to generate long | https://arxiv.org/abs/2505.20993v1
reasoning traces with sparse rewards. This leads to significant improvements, particularly in complex math, code, and other professional domains [ 13,51]. Despite these advances, the origin and location of reasoning ability in LLMs remain underexplored. Interpretability of LLMs. Understanding the inner workings of LLMs has attracted growing interest. Prior efforts include attention visualization [ 44], probing [ 15], and model editing [ 24,34], with the aim of interpreting internal representations. Other studies decompose the behavior of the model into attribute functions to specific modules [ 11]. The "Physics of Language Models" series [ 1,2,3] investigates LLMs through controlled setups to reveal empirical and universal laws that dictate LLM behavior. However, these studies often exclude the most advanced models or focus on narrow, synthetic settings, offering limited insight into real-world models. Their findings provide little practical guidance for understanding reasoning in state-of-the-art models. 6 Conclusions This work investigates a fundamental question in understanding large language models (LLMs): Is there a component or several components that are responsible for achieving the reasoning ability in LLMs? If the answer is affirmative, which components are responsible for the improvement? We hypothesize that the output projection ( o_proj ) module plays a central role in enabling reasoning capabilities. To support this, we propose Stethoscope for Networks (SfN) , a diagnostic framework 9 that encompasses several probing techniques. Through the proposed Delta ,Merge ,Freeze , and Destruction stethoscopes, we observe consistent patterns indicating that o_proj is critical for reasoning, while other modules primarily support conversational fluency. These findings open new directions for efficient and modular LLM training. Our findings are primarily based on a limited set of model families and reasoning benchmarks, and may not generalize to all architectures or tasks. Some diagnostic results rely on qualitative assessments rather than statistical validation. Furthermore, while the role of o_proj is empirically highlighted, a theoretical understanding of its function in reasoning remains to be established. Acknowledgments and Disclosure of Funding This work was partly supported by the National Natural Science Foundation of China under Grant 62276123 JW proposed the assumptions (Stethoscopes for Networks), hypothesis and conjectures. JS started this line of research in our group, proposed the Zero destructor, and effectively supported our main findings with experimental results. JW and JS wrote the paper. We thank Ke Zhu for discussions. A Experimental Details We primarily utilize open-sourced models to conduct experiments in this work. Given that DeepSeek- R1 is one of the most widely adopted reasoning models, and its authors have released a series of distilled models based on R1 [ 13], including both the specified base and finetuned reasoning models, we adopt their configurations in our study. Specifically, we use the DeepSeek-R1-Distill-Qwen [ 13] models with sizes of 1.5B, 7B, 14B, 32B and 70B as our reasoning models, and select Qwen2.5- Math-1.5B, 7B [ 53], LLaMA3.1-8B [ 12], Qwen2.5-14B, 32B [ 52] or Llama-3.3-70B-Instruct [ 12] as base models. All models are loaded and run using the Transformers library [49]. Our evaluation framework is based on the lm-evaluation-harness package [ 10]. To accelerate inference, we use vLLM [ 23] | https://arxiv.org/abs/2505.20993v1 |
as the backend, which may slightly affect performance due to backend-specific optimizations. In the Merge Stethoscope experiments, we observe that the “chat” interface often generates irrelevant or nonsensical responses, while the “generate” interface produces coherent and contextually appropriate outputs. We suspect this discrepancy arises from misinterpreted system prompts. Therefore, we rely on the “generate” interface and implement a custom evaluation toolkit. For the Freeze Stethoscope experiments, we build on the codebase of s1[ 25]. We use a learning rate of 1e-5, weight decay of 1e-4, a batch size of 16, and train for 5 epochs. Due to hardware limitations (i.e., lack of access to 16 H100 GPUs), we leverage DeepSpeed[ 33] with ZeRO Stage 3[31] to enable efficient training. The base model used here is Qwen2.5-32B-Instruct[ 52]. Evaluation is again conducted with lm-evaluation-harness, following the modified pipeline by the authors of s1, which disables generation of the end-of-thinking token and optionally appends the string “Wait” to the reasoning trace to encourage model reflection. We adopt the Budget Forcing “Wait” ×2 as our default testing configuration. All visualization and inference experiments on 1.5B–14B models are conducted on a single NVIDIA A100 GPU. For training and evaluating 32B-70B models, we use a cluster of 8 NVIDIA A100 GPUs. Training typically takes around 6 hours, while testing on a single dataset usually requires about 2 hours. B More Experimental Results In the main paper, we present visualization results for the 1.5B, 14B, and 32B models. Here, we supplement those results by providing additional visualizations for the 7B, 8B, and 70B models. Following the Delta Stethoscope pipeline, we visualize both the absolute weight shift |wX(B)− wX(A)|ℓ2and the relative weight shiftwX(B)−wX(A) wX(A). The absolute weight shifts are shown in Figure 7, and the relative weight shifts are presented in Figure 8. The trends observed in the main paper remain consistent across these additional models. Notably, o_proj consistently exhibits the 10 Qwen2.5Math–7B vs. DeepSeek-R1-Distill-Qwen-7BLlama-3.1-8B vs. DeepSeek-R1-Distill-Llama-8BLlama3.3-70B-Instruct vs. DeepSeek-R1-Distill-Llama-70BFigure 7: Per-module L2 distance of linear weights between models AandB.Notably, the o_proj module shows the largest in 7B, 8B and 70B models, highlighting its potential importance for reasoning. Llama3.3-70B-Instruct vs. DeepSeek-R1-Distill-Llama-70BLlama-3.1-8B vs. DeepSeek-R1-Distill-Llama-8B MLP.down_projMLP.gate_projMLP.up_projself_attn.k_projself_attn.o_projself_attn.q_projself_attn.v_projQwen2.5Math–7B vs. DeepSeek-R1-Distill-Qwen-7B Figure 8: Layer-wise distribution of relative weight changes between models AandB.While most modules display a unimodal distribution, the o_proj module uniquely exhibits a bimodal distribution, highlighting its distinctive behavior. largest weight shift, with the effect being especially pronounced in the 70B model. Moreover, o_proj is the only module that displays a bimodal distribution in the relative weight shift. C Statistical Significance and Broader Impacts We report appropriate information regarding the statistical significance of our experiments. 
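The per-module shift statistics behind Figures 7 and 8 above reduce to comparing two state dicts. A minimal sketch, assuming both checkpoints expose identical parameter names; the per-layer grouping and the plotting are left out.

    import torch

    def weight_shifts(state_a, state_b):
        # state_a, state_b: state_dicts of base model A and reasoning model B.
        absolute, relative = {}, {}
        for name, wa in state_a.items():
            if name not in state_b or not torch.is_floating_point(wa):
                continue
            wa, wb = wa.float(), state_b[name].float()
            absolute[name] = torch.linalg.norm(wb - wa).item()   # ||w_X(B) - w_X(A)||_2
            # Entries of (w_X(B) - w_X(A)) / w_X(A); near-zero denominators are skipped.
            denom = torch.where(wa.abs() > 1e-8, wa, torch.full_like(wa, float("inf")))
            relative[name] = ((wb - wa) / denom).flatten()
        return absolute, relative

Summing the absolute values per module type reproduces bar charts in the style of Figure 7, and histogramming the relative entries per module gives distributions like those in Figure 8.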
While we do not primarily focus on classical significance tests such as p-values, we provide multiple forms of empirical evidence—such as consistent module-specific weight shifts, response-level comparisons under controlled manipulations, and loss curves under different tuning strategies—that collectively establish the robustness of our findings. These analyses serve as a practical alternative to traditional error bars or confidence intervals and help substantiate our key claims. This research has both promising benefits and important risks to consider. On the positive side, | https://arxiv.org/abs/2505.20993v1 |
the proposed Stethoscope for Networks (SfN) framework provides a novel set of tools for interpreting 11 LLMs, especially by localizing specific capabilities—such as reasoning—to individual components like the output projection (o_proj). These tools may significantly improve our understanding of LLMs, enabling more transparent, modular, and efficient model development. For instance, if reasoning abilities can be enhanced by tuning a small subset of parameters, it could greatly reduce computational costs and increase accessibility for developing domain-specific or lightweight models. However, this line of work also carries potential risks. Precisely identifying and isolating reasoning- related components might lower the barrier for targeted manipulation, such as unauthorized transfer or removal of reasoning abilities across models. This could facilitate misuse scenarios, including capability extraction, tampering, or model theft. Furthermore, while the diagnostic methods proposed aim to support interpretability, there is a risk that they may be overinterpreted, leading to an inflated sense of model transparency that does not generalize across architectures or tasks. References [1]Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.1, knowledge storage and extraction. arXiv preprint arXiv:2309.14316 , 2023. [2]Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.2, knowledge manipula- tion. arXiv preprint arXiv:2309.14402 , 2023. [3] Zeyuan Allen-Zhu and Yuanzhi Li. Physics of language models: Part 3.3, knowledge capacity scaling laws. arXiv preprint arXiv:2404.05405 , 2024. [4]Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450 , 2016. [5]Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609 , 2023. [6]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems , 33:1877–1901, 2020. [7]Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Nina Mielke, Alec Radford, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 , 2022. [8]Paul F Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Advances in neural information processing systems , volume 30, 2017. [9]Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Xin Wang, Xingyu Yuan, Adams Yu, Sharan Narang, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416 , 2022. [10] Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. The language model evaluation harness, 07 2024. [11] Mor Geva, Tal Schuster, and Jonathan Berant. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913 , 2021. 
[12] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024. [13] Daya Guo, Dejian Yang, Haowei | https://arxiv.org/abs/2505.20993v1 |
Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. 12 [14] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Yu Wu, YK Li, et al. Deepseek-coder: When the large language model meets programming–the rise of code intelligence. arXiv preprint arXiv:2401.14196 , 2024. [15] John Hewitt and Christopher D Manning. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , pages 4129–4138, 2019. [16] Weizhe Hua, Zihang Dai, Hanxiao Liu, and Quoc Le. Transformer quality in linear time. In International conference on machine learning , pages 9099–9117. PMLR, 2022. [17] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jia- jun Zhang, Bowen Yu, Keming Lu, et al. Qwen2. 5-coder technical report. arXiv preprint arXiv:2409.12186 , 2024. [18] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. [19] Maxwell Jia. Aime 2024 dataset. https://huggingface.co/datasets/Maxwell-Jia/ AIME_2024 , 2024. [20] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. [21] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 , 2020. [22] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics , 7:453–466, 2019. [23] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large lan- guage model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles , 2023. [24] Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. Advances in neural information processing systems , 35:17359–17372, 2022. [25] Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393 , 2025. [26] Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. Abstractive text sum- marization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023 , 2016. [27] Shashi Narayan, Shay B Cohen, and Mirella Lapata. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. arXiv preprint arXiv:1808.08745 , 2018. 
[28] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela | https://arxiv.org/abs/2505.20993v1 |
Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155 , 2022. [29] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog , 1(8):9, 2019. 13 [30] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems , 36:53728–53741, 2023. [31] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimiza- tions toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis , pages 1–16. IEEE, 2020. [32] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 , 2016. [33] Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining , pages 3505–3506, 2020. [34] Gautam Reddy. The mechanistic basis of data dependence and abrupt learning in an in-context classification task. arXiv preprint arXiv:2312.03002 , 2023. [35] Timo Schick, Ananya Dwivedi-Yu, Roberta Raileanu, Saghar Hosseini, Murray Chadwick, Gau- rav Mishra, Siddharth Karamcheti, Neil Houlsby, Aravind Elangovan, Mike Lewis, et al. Tool- former: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761 , 2023. [36] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017. [37] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. [38] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing , 568:127063, 2024. [39] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1. 5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599 , 2025. [40] Qwen Team. QwQ-32B: Embracing the Power of Reinforcement Learning, March 2025. [41] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timo- thée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023. [42] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023. [43] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems , 30, 2017. [44] Jesse Vig and Yonatan Belinkov. 
Analyzing the structure of attention in a transformer language model. arXiv preprint arXiv:1906.04284 , 2019. [45] Xuezhi Wang, Jason | https://arxiv.org/abs/2505.20993v1 |
Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 , 2022. [46] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instruc- tions. arXiv preprint arXiv:2212.10560 , 2022. 14 [47] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682 , 2022. [48] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 , 2022. [49] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the- art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations , pages 38–45, Online, October 2020. Association for Computational Linguistics. [50] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 , 2016. [51] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. Qwen3 technical report, 2025. [52] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 , 2024. [53] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122 , 2024. [54] Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild, 2025. [55] Denny Zhou, Dale Schuurmans, Xuezhi Wang, Ed Chi, and Quoc V Le. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625 , 2023. 15 | https://arxiv.org/abs/2505.20993v1 |
arXiv:2505.20997v1 [cs.LG] 27 May 2025

BIPNN: LEARNING TO SOLVE BINARY INTEGER PROGRAMMING VIA HYPERGRAPH NEURAL NETWORKS

Sen Bai, Changchun University of Science and Technology, China ([email protected]); Yang, Changchun University of Science and Technology, China ([email protected]); Xin Bai, Huawei Technologies Co. Ltd, China ([email protected]); Zhang, Changchun University of Science and Technology, China ([email protected]); Jiang, Changchun University of Science and Technology, China ([email protected])

May 28, 2025

ABSTRACT

Binary (0-1) integer programming (BIP) is pivotal in scientific domains requiring discrete decision-making. With the advance of AI computing, recent works explore neural network-based solvers for integer linear programming (ILP) problems. Yet, they lack scalability for tackling nonlinear challenges. To handle nonlinearities, state-of-the-art Branch-and-Cut solvers employ linear relaxations, leading to exponential growth in auxiliary variables and severe computational limitations. To overcome these limitations, we propose BIPNN (Binary Integer Programming Neural Network), an unsupervised learning framework to solve nonlinear BIP problems via hypergraph neural networks (HyperGNN). Specifically, (I) BIPNN reformulates BIPs, i.e., constrained, discrete, and nonlinear (sin, log, exp) optimization problems, into unconstrained, differentiable, and polynomial loss functions. The reformulation stems from the observation of a precise one-to-one mapping between polynomial BIP objectives and hypergraph structures, enabling the unsupervised training of HyperGNN to optimize BIP problems in an end-to-end manner. On this basis, (II) we propose a GPU-accelerated and continuous-annealing-enhanced training pipeline for BIPNN. The pipeline enables BIPNN to optimize large-scale nonlinear terms in BIPs fully in parallel via straightforward gradient descent, thus significantly reducing the training cost while ensuring the generation of discrete, high-quality solutions. Extensive experiments on synthetic and real-world datasets highlight the superiority of our approach.

1 Introduction

For decades, binary integer programming (BIP), a powerful mathematical tool characterized by discrete binary decision variables (0 or 1), has been of critical importance in numerous domains, such as operational optimization [1, 2, 3], quantum computing [4, 5, 6], computational biology [7, 8], and materials science and computational chemistry [9, 10]. However, BIP is known to be NP-complete [11], making large-scale BIP instances computationally intractable. As AI computing shines in scientific discovery, the potential of neural network-based IP solvers has emerged in recent years. To address integer linear programming (ILP) problems, MIP-GNN [12] leverages graph neural networks (GNN) to improve performance. Another GNN&GBDT-guided framework [13] for large-scale ILP problems can save up to 99% of running time while achieving the same solution quality as SCIP [14], a leading IP solver. However, these neural network-based ILP solvers lack scalability for nonlinear BIPs. To handle nonlinearities, state-of-the-art Branch-and-Cut solvers (e.g., SCIP [15]) rely on linear relaxation, which introduces a number of auxiliary variables. | https://arxiv.org/abs/2505.20997v1
Figure 1: The BIPNN framework.

Once linearized, these problems are solved using linear programming (LP) solvers (e.g., the Simplex method^1). Consequently, large-scale nonlinear BIPs often suffer from prohibitive computational costs. As BIP solvers continue to evolve, linearization remains indispensable for making nonlinearities tractable. These limitations motivate us to develop a streamlined and general-purpose BIP solver to advance the state of the art. To adapt to real-world applications, our work grapples with challenges arising from neural networks' unique characteristics, beyond linearization-based methods, as summarized below:

Challenge 1. Meticulously modeling nonlinear terms in BIP objectives and constraints;
Challenge 2. Utilizing the GPU's parallel computing capability.

To this end, in this work we propose BIPNN (Binary Integer Programming Neural Network), an unsupervised BIP solver that bridges the gap between nonlinear BIP and deep neural networks. Our overarching idea stems from the observation of a one-to-one mapping between polynomial BIP objectives and hypergraph structures (upper right of Fig. 1). As depicted in Fig. 1, our framework consists of three phases: 1) In the first phase, we employ a broadly applicable penalty-term method to convert constrained BIP problems into the polynomial unconstrained binary optimization (PUBO^2) formalism. To handle exponential and trigonometric terms, we propose a novel transformation to represent them in the form of polynomials. These refined polynomial objectives are adaptable to neural network-based solvers when applied as loss functions.

^1 To be precise, the Simplex method is designed to solve linear programming (LP) problems in polynomial time, meaning they belong to the class P [16].
^2 The mathematical formulation PUBO is well-known in quantum computing, for modeling complex optimization problems in a way | https://arxiv.org/abs/2505.20997v1
quantum computers may solve efficiently. 2 APREPRINT - M AY28, 2025 2) In the second phase, we leverage hypergraph neural networks (HyperGNN) to address Challenge 1 , capturing high-order correlations between binary decision variables, or in other words the polynomial terms in the refined PUBO objective. By applying a relaxation strategy to the PUBO objective to generate a differentiable loss function with which we train the HyperGNN in an unsupervised manner. 3) Nevertheless, when we train these HyperGNNs to minimize the PUBO objectives, we encounter severe obstacles of low computational efficiency in these polynomial losses with numerous variables. In the third phase, leveraging GPUs, we further propose an algorithm to address Challenge 2 via matrix operations on the incidence matrices of hypergraphs. In summary, we contribute: 1) BIPNN, an unsupervised HyperGNN-based solver that allows learning approximate BIP solutions in an end-to-end differentiable way with strong empirical performance. 2) An empirical study of the performance of BIPNN on synthetic and real-world data, demonstrating that unsupervised neural network solvers outperform classic BIP solvers such as SCIP and Tabu in tackling large-scale nonlinear BIP problems. 3) Large-scale nonlinear optimization has long been challenging due to its inherent complexity and scalability issues. We advance this field by employing several nonlinearity modeling methods for BIP, including the polynomial reformulation and unconstrained reformulation. These methods provide instructive guidance for unsupervised neural network-based solvers. 2 Notations and Definitions In the following, we will formulate the BIP problem and articulate the definition of hypergraphs. Definition 1 (Formulation of BIP). Non-linear BIP is an optimization problem where the decision variables x= (x1, x2, ..., x m)are restricted to binary values ( 0or1), and the objective function OBIPor constraints (or both) are nonlinear. Below is the general formulation. min OBIP=f(x) s.t. gk(x)≤0 for all k= 1,2, . . . , K ql(x) = 0 for all l= 1,2, . . . , L xi∈ {0,1}for all i= 1,2, . . . , n(1) where f(x),gk(x)andql(x)are nonlinear functions of the decision variables x. □ Definition 2 (Hypergraph). A hypergraph is defined by G= (V, E), where V={v1, v2, ..., v |V|}stands for a set of vertices and E={e1, e2, ..., e |E|}denotes a set of hyperedges. Each hyperedge ej∈Eis a subset of V. A hypergraph Gcan be represented by the incidence matrix (Fig. 1 at the bottom) H∈ {0,1}|V|×|E|, where Hij= 1ifvi∈ej, or otherwise Hij= 0. □ 3 BIPNN: HyperGNN-based Optimizer for PUBO-formulated BIP For easier comprehension of our approach, in this section we first elaborate how to solve an unconstrained, PUBO- formulated BIP problem as depicted in Eq. 2. Then, in Sec. 4, we will show how to transform a general BIP problem with constraints and nonlinear terms into PUBO formalism. 3.1 Modeling PUBO-formulated BIPs via Hypergraphs BIPNN employs a HyperGNN-based optimizer (upper right of Fig. 1) to solve PUBO-formulated BIP problems. Inspired by the binary characteristic of variables, we can reformulate general BIPs as PUBO problems through the polynomial reformulation in Sec.4.1 and unconstrained reformulation in Sec.4.2. A PUBO problem is to optimize the cost function: OPUBO =X iQixi+X i,jQijxixj+X i,j,kQijkxixjxk+··· | https://arxiv.org/abs/2505.20997v1 |
(2)

where x_i ∈ {0,1} are binary decision variables and the set of all decision variables is denoted by x = (x_1, x_2, ..., x_m). As shown in Fig. 2, for ease of representation, a PUBO objective O_PUBO with n terms can be decomposed into two components: the PUBO matrix Q = [Q_1, Q_2, ..., Q_n], and n linear or polynomial terms such as x_i, x_i x_j, or x_i x_j x_k.

Figure 2: Modeling PUBO-formulated BIPs via hypergraphs. The illustrated example decomposes O_PUBO = Q_1 x_1 + Q_2 x_1 x_2 + Q_3 x_3 x_4 + Q_4 x_4 + Q_5 x_1 x_2 x_3 into hyperedges e_1 ~ x_1, e_2 ~ x_1 x_2, e_3 ~ x_3 x_4, e_4 ~ x_4, e_5 ~ x_1 x_2 x_3, the PUBO matrix Q = [Q_1, ..., Q_5], and the corresponding incidence matrix H.

In this way, we discover that multi-variable interactions in O_PUBO can be modeled as a hypergraph G = (V, E), where |E| = n, and each hyperedge e ∈ E encodes a single decision variable x_i or a polynomial term such as x_i x_j or x_i x_j x_k.

3.2 Neural Network-based Optimizer

The training workflow of the neural network-based optimizer is illustrated at the bottom of Fig. 1.

HyperGNN Architecture. Initially, for a PUBO-transformed hypergraph G = (V, E), HyperGNNs take the incidence matrix H of G and a randomly initialized X^(0) ∈ R^{m×d} as inputs. Subsequently, BIPNN applies the sigmoid function to produce the output vector x = (x_1, x_2, ..., x_m), where x_i ∈ [0, 1] are the relaxations of the decision variables x_i ∈ {0, 1}. The HyperGNN model operates as follows:

x = sigmoid(HyperGNN(H, X^(0)))    (3)

where HyperGNN is a multi-layer hypergraph convolutional network, such as HGNN+ [17], HyperGCN [18], or UniGCN [19].

Training to Optimize. As an unsupervised learning model, BIPNN relaxes the PUBO objective O_PUBO into a differentiable loss function and trains to optimize it. Specifically, O_PUBO can be expressed by the output x and the incidence matrix H as depicted in Fig. 1. We aim to find the optimal solution x_s = argmin O_PUBO(x, H). As training progresses, the x_i ∈ x will gradually converge to binary solutions.

GPU-accelerated Training. For a large-scale BIP problem, the numerous polynomial terms in O_PUBO lead to a high computational cost. To address this, an intuitive idea is to leverage GPU-supported matrix operations to accelerate training. However, PUBO problems lack a straightforward matrix formulation. To this end, we propose a GPU-accelerated PUBO objective as follows:

O_PUBO = ColM(x ⊙_(B) H + (1 − H)) Q^T    (4)

where x is the output of the HyperGNN, H is the incidence matrix, and Q = [Q_1, Q_2, ..., Q_n] is the PUBO matrix. More concretely, x ⊙_(B) H denotes the element-wise Hadamard product with broadcasting between the m-dimensional vector x and the matrix H ∈ R^{m×n}. We add 1 − H to x ⊙_(B) H to fill zero-valued elements with 1. Based on this operation, we use the column-wise multiplication, denoted ColM, on the first dimension of the matrix obtained from x ⊙_(B) H + (1 − H). Through the ColM operation we obtain an n-dimensional vector, of which each element represents a polynomial term in O_PUBO. The final loss function is computed by scaling each polynomial term with its respective coefficient Q_i. The detailed explanation is illustrated in Fig. 1. | https://arxiv.org/abs/2505.20997v1
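Eq. 4 translates almost literally into a few tensor operations. The sketch below (PyTorch; the helper names are ours, not from the released code) builds H and Q from a list of monomials and evaluates ColM(x ⊙ H + (1 − H)) Q^T.

    import torch

    def pubo_to_hypergraph(terms, num_vars):
        # terms: list of (coefficient, variable indices), one entry per monomial,
        # e.g. 0.5*x0*x2*x3 -> (0.5, (0, 2, 3)); each monomial becomes one hyperedge.
        H = torch.zeros(num_vars, len(terms))
        Q = torch.zeros(1, len(terms))
        for j, (coeff, idx) in enumerate(terms):
            Q[0, j] = coeff
            H[list(idx), j] = 1.0
        return H, Q

    def pubo_loss(x, H, Q):
        # x: (m,) relaxed HyperGNN outputs in [0, 1]; H: (m, n); Q: (1, n).
        # Filling the zeros of H with 1 makes the column-wise product equal to
        # prod_{i in e_j} x_i for each hyperedge e_j, exactly as in Eq. 4.
        masked = x.unsqueeze(1) * H + (1.0 - H)   # broadcasted Hadamard product
        monomials = masked.prod(dim=0)            # ColM: product over the first dimension
        return monomials @ Q.squeeze(0)           # weight each term by its coefficient

    # Toy check on O = 2*x0 - 3*x0*x1 + x1*x2*x3 at x = (1, 1, 0, 1): 2 - 3 + 0 = -1.
    H, Q = pubo_to_hypergraph([(2.0, (0,)), (-3.0, (0, 1)), (1.0, (1, 2, 3))], num_vars=4)
    print(pubo_loss(torch.tensor([1.0, 1.0, 0.0, 1.0]), H, Q))   # tensor(-1.)

During training, x would come from sigmoid(HyperGNN(H, X^(0))) as in Eq. 3, with the annealing penalty described next added on top of this loss.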
Time Complexity Analysis. For x ∈ R^m, Q ∈ R^{1×n}, and H ∈ R^{m×n}, the time complexity of Eq. 4 is O(m × n). For GPU-accelerated training, element-wise operations such as the Hadamard product are fully parallelizable. The column-wise product over m leads to time complexity O(log m). Thus, the theoretical best GPU time complexity is O(log m). Utilizing T cores, the realistic GPU time complexity is O(m × n / T).

Annealing Strategy. To achieve unsupervised learning, BIPNN relaxes PUBO problems into continuous space. The differentiable relaxation of discrete decision variables sometimes leads to continuous solutions x_i ∈ [0, 1]. To address this, we employ the continuous relaxation annealing (CRA) [20] method. Specifically, BIPNN uses the following loss function: O_PUBO = ColM(x ⊙_(B) H + (1 − H)) Q^T + ϕ(x), where ϕ(x) = γ Σ_{i=1}^{n} (1 − (2x_i − 1)^α) is the penalty term, γ controls the penalty strength, and α is an even integer. We initialize γ < 0 and gradually increase it to a positive value as training progresses. The annealing strategy enhances the performance of BIPNN in three aspects: (i) in the high-temperature phase (γ < 0), it smooths the HyperGNN, preventing it from getting trapped in local optima; (ii) in the low-temperature phase (γ > 0), it enforces the discreteness of solutions; (iii) it effectively accelerates the training process.

4 BIPNN: Polynomial & Unconstrained Reformulation of BIP

In this section, we explain how to reformulate nonlinear BIPs as unconstrained and polynomial optimization problems, which are compatible with our neural network-based optimizer.

4.1 Polynomial Reformulation of BIP

Our approach is inspired by the observation that, for any binary variable, a nonlinear term such as e^x can be exactly fitted by a polynomial equivalent h(x) = ax + b such that h(x) = e^x for x ∈ {0, 1}. That is, h(x) = (e − 1)x + 1, where h(0) = 1 and h(1) = e. To handle univariate nonlinearities, including trigonometric, logarithmic, and exponential terms (e.g., sin x, log x, and e^x), we have the following transformation: h(x) = (h(1) − h(0))x + h(0). For multivariate terms such as e^{x_i x_j} and sin(x_i x_j), where x_i x_j ∈ {0, 1}, we can perform the transformation as follows: h(Π_{i∈S} x_i) = (h(1) − h(0)) Π_{i∈S} x_i + h(0). BIPNN employs a more general method to handle more intricate multivariate nonlinear terms (such as sin(x_i + x_j)). For a set of binary decision variables x_1, x_2, ..., x_m, a nonlinear function h(x_1, x_2, ..., x_m) can be transformed into polynomial form as follows:

h(x_1, x_2, ..., x_m) = Σ_{S ⊆ {1,2,...,m}} c_S Π_{i∈S} x_i    (5)

By setting up a system of equations based on all possible combinations of x_1, x_2, ..., x_m, we can determine the coefficients c_S to precisely fit h(x_1, x_2, ..., x_m) by leveraging the simple inclusion-exclusion principle (refer to Appendix A) as below:

c_S = Σ_{T ⊆ S} (−1)^{|S|−|T|} f(T)    (6)

where f(T) represents the function value when the variables in the subset T are 1 and the others are 0. For each subset S, this requires calculating 2^{|S|} values of f(T). □

As an example, we have sin(x_1 + x_2) = 0.8415 x_1 + 0.8415 x_2 − 0.7737 x_1 x_2. A toy example of sin(x_1 + x_2 + x_3) is illustrated in Appendix A. Note that polynomial reformulation of all nonlinear terms in a BIP objective is not necessary. If the transformation becomes overly complex, we may opt to retain the original nonlinear term and directly incorporate it as part of the loss function of the HyperGNN. | https://arxiv.org/abs/2505.20997v1
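The coefficient rule of Eq. 6 is easy to check numerically. A small sketch (function names are illustrative) that fits an arbitrary nonlinear function of binary variables and reproduces the sin(x_1 + x_2) example above, with indices 0 and 1 standing in for x_1 and x_2:

    import math
    from itertools import chain, combinations

    def powerset(indices):
        return chain.from_iterable(combinations(indices, r) for r in range(len(indices) + 1))

    def polynomial_coeffs(h, m):
        # c_S = sum over T subset of S of (-1)^(|S|-|T|) * h(indicator of T), cf. Eq. 6;
        # h takes a 0/1 list of length m.
        coeffs = {}
        for S in powerset(range(m)):
            c = sum((-1) ** (len(S) - len(T)) * h([1 if i in T else 0 for i in range(m)])
                    for T in powerset(S))
            if abs(c) > 1e-12:
                coeffs[S] = c
        return coeffs

    # Recovers sin(x_1 + x_2) ~ 0.8415*x_1 + 0.8415*x_2 - 0.7737*x_1*x_2 on {0,1}^2.
    print(polynomial_coeffs(lambda x: math.sin(x[0] + x[1]), 2))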
4.2 Unconstrained Reformulation of BIP

We propose a novel penalty method to transform the constrained BIP problem into an unconstrained form. In penalty methods [21, 22], unconstrained reformulation is achieved by adding "penalty terms" to the objective function that penalize violations of constraints. A well-constructed penalty term must be designed such that it equals 0 if and only if the constraint is satisfied, and takes a positive value otherwise. Specifically, given a BIP problem as in Eq. 1, for inequality constraints g_k(x) ≤ 0 we have penalty terms P_k(x) = λ_k · (max(0, g_k(x)))^2, and for equality constraints q_l(x) = 0 we have penalty terms Q_l(x) = µ_l · (q_l(x))^2, where λ_k, µ_l are sufficiently large penalty coefficients. By combining all terms into a single objective function, we obtain an unconstrained BIP objective:

min O_BIP = f(x) + Σ_{k=1}^{K} λ_k · (max(0, g_k(x)))^2 + Σ_{l=1}^{L} µ_l · (q_l(x))^2    (7)

As part of the loss function of BIPNN, O_BIP must be differentiable to enable gradient-based optimization. However, max(0, g_k(x)) is not a continuously differentiable function, thus finding an appropriate penalty term is crucial. We propose two methods to address this issue:

1) ReLU-based Penalty. We can use ReLU(g_k(x))^2 = (max(0, g_k(x)))^2 to handle constraints. This is a general method for a large number of variables x_i in a constraint g_k(x).

2) Polynomial Penalty. In the following, we present an algorithm to construct polynomial penalty terms with 2^∆ time complexity for g_k(x), where ∆ is the number of variables in constraint g_k(x). For binary variables, do there exist polynomial penalty terms that correspond to BIP constraints? To answer this question, we have the following discussion. For x_1 + 2x_2 − 2 ≤ 0, we observe that the violating subset {x_1 = 1, x_2 = 1} corresponds to the polynomial penalty term λ(x_1 x_2). For another constraint x_1 + 3x_2 − 2 ≤ 0, the violating subsets {x_1 = 0, x_2 = 1} and {x_1 = 1, x_2 = 1} correspond to the polynomial penalty term λ(x_2 + x_1 x_2) or λ x_2. Through an in-depth analysis, we propose a novel method to transform nonlinear BIP constraints into polynomial penalty terms.

Figure 3: To solve the hypergraph max-cut problem, BIPNN generates a new hypergraph structure. However, both of these hypergraphs can be utilized for training the HyperGNN model.

To handle an inequality constraint g(x) ≤ 0 for the BIP problem in Eq. 1, our method consists of three steps (for a toy example, refer to Appendix B):

(i) Initially, we express the constraint g(x) ≤ 0 as a boolean indicator function ψ(x), with ψ(x) = 1 if g(x) > 0 (violation) and ψ(x) = 0 otherwise (feasible). We then define the minimal violation subsets V as the smallest variable combinations causing constraint violations:

V = { S ⊆ {1, ..., n} : ψ(x) = 1 when x_i = 1 ∀ i ∈ S and x_j = 0 ∀ j ∉ S }    (8)

Each S ∈ V cannot be reduced further without eliminating the violation.

(ii) Generate a penalty term for each minimal violation subset S ∈ V:

P(x) = λ Σ_{S ∈ V} Π_{i ∈ S} x_i    (9)

where λ is the penalty coefficient.

(iii) Combine each term into the BIP objective function:

min O_BIP = f(x) + P(x)    (10)

In the worst | https://arxiv.org/abs/2505.20997v1
case, when an enumeration method is used in step (i), it requires calculating 2∆subsets, where ∆is the number of variables in constraint g(x). Nevertheless, in most real-world problems (e.g. max-cut, and maximal independent set or MIS) involving graphs, the variables associated with each constraint often exhibit locality. □ The polynomial penalty method facilitates to incorporate penalty terms to PUBO objectives and use GPU-accelerated training pipeline to solve BIPs. As far as we know, only a few number of constraint/penalty pairs [ 22] associated have been identified in existing literature. Our work significantly expands the potential application domains of the penalty method. 5 Discussion Feasible Solutions. Firstly, a PUBO problem always has feasible solutions. The feasible set is the entire space of binary variable combinations, since there are no constraints to exclude any combination. Every possible binary assignment xi∈ {0,1}is inherently feasible. Secondly, the feasibility of a nonlinear BIP problem depends on the constraint compatibility—whether there exists at least one binary variable assignment x∈ {0,1}mthat satisfies all nonlinear constraints simultaneously. In BIPNN, we determine the existence of feasible solutions through (i)Training-phase feasibility check: if all penalty terms (e.g., constraint violations) converge to zero during training, feasible solutions exist; otherwise, the problem is infeasible. (ii)Post-training verification: we sample candidate solutions from the trained model and explicitly verify whether they satisfy all constraints. The Effectiveness of BIPNN’s Hypergraph Generation Mechanism. As depicted in Fig. 3, when BIPNN is applied to solve combinatorial optimization (CO) problems on hypergraphs, it generates an alternative hypergraph structure. However, both of the hypergraphs can be used as the input of BIPNN. A critical question arises: which type of hypergraph structure achieves better performance when applied to HyperGNN? The main difference between these two hypergraphs is that the hypergraph generated by BIPNN breaks down the original hypergraph’s high-order hyperedges into numerous low-order ones. We argue that BIPNN training with the original hypergraph structure is more computationally efficiency, while BIPNN-generated hypergraph structure leads to more optimal solutions. In Sec. 6.3, we will empirically compare the solution quality of both methods. 6 APREPRINT - M AY28, 2025 (a) SCIP, d= 4. (b) SCIP, d= 6. (c) SCIP, d= 4. (d) SCIP, d= 6. (e) Tabu, d= 4. (f) Tabu, d= 6. Figure 4: Comparison of BIPNN and existing BIP solvers. dis the degree of polynomial terms in BIP objective functions. (a)(b) show the solving time required for BIPNN and SCIP to obtain the same solution. (c)(d) show the ratio of the solutions of BIPNN to SCIP; (e)(f) illustrate the ratio of the solutions of BIPNN to Tabu; Runtime is restricted to 1 hour. 6 Experimental Results In this section, we describe our empirical experiments on BIPNN and baseline optimization tools. Benchmarks . To evaluate BIPNN on BIP problems with diverse scales, the datasets are generated using DHG library3. To evaluate the quality of solutions and computational efficiency of BIPNN, datasets of varying scales are generated in three steps: Initially, DHG library is applied to generate hypergraph structures (where |E|= 2|V|). Subsequently, a random coefficient is assigned to each hyperedge (representing a polynomial term) | https://arxiv.org/abs/2505.20997v1 |
to generate PUBO objective functions. Thereafter, several constraints (penalty terms) were randomly incorporated into the PUBO objectives. To demonstrate the effectiveness of BIPNN on real-world settings, we also conduct experiments on the hypergraph max-cut problem (refer to Appendix C), a well-known BIP problem benchmark. Moreover, we conduct experiments on publicly-available hypergraph datasets (refer to Appendix D). Baseline Methods. In our experiments, the baseline methods include optimization techniques and tools such as SCIP [14], Tabu search [23]. Implementation Details . Experiments are conducted on an Intel Core i9-12900K CPU with 24 cores, and an NVIDIA GeForce RTX 3090 GPU with 24 G of memory. We adopt two-layer HGNN+ [ 17] as the HyperGNN model for the experiments. 6.1 Comparison with Linearization-based BIP Solvers SCIP. SCIP is an exact solver based on the branch-and-cut algorithm. Theoretically, given sufficient time and computational resources, SCIP guarantees an exact solution. However, for large-scale problems, due to time constraints, SCIP may terminate prematurely and return the approximate solution. To conduct the experiment, we generate a specific BIP instance for each size of variables. Specifically, for a BIPNN-generated hypergraph, the number of vertices (variables) |V|ranges from 200to3000 . The degrees of vertices are set to 4(Fig. 4a) and 6(Fig. 4b) respectively. Fig. 4a and Fig. 4b show the comparison of the solving time for BIPNN and SCIP. We evaluate the solving time taken by BIPNN to obtain the best approximate solution and the time required by SCIP to find the same solution. Experimental 3https://deephypergraph.readthedocs.io/en/latest/index.html 7 APREPRINT - M AY28, 2025 Table 1: The solutions of graph/hypergraph max-cut problems ( 1-hour time limit). Method BAT EAT UAT DBLP CiteSeer Primary High Cora SCIP 655 3,849 7,899 2,869 3,960 7,603 4,599 1,215 Tabu 652 3,972 8,402 2,710 3,717 8,500 5,160 1,360 BIPNN 651 3,978 8,407 2,801 3,852 8,509 5,216 1,384 (a)d= 4. (b)d= 6. (c)d= 4. (d)d= 6. Figure 5: Comparison of the quality of solutions and time efficiency of BIPNN when it applys its generated hypergraph structure or the original hypergraph structure to solve hypergraph max-cut problems. dis the degree of polynomial terms in BIP objective functions. (a)(b) show the numbers of cuts; (c)(d) show the solving time. results demonstrate that the solving time of BIPNN grows linearly and slowly with increasing problem size, while SCIP’s solving time exhibits exponential growth. This trend becomes more pronounced when the degree of polynomial terms is 6. Moreover, we impose a 1-hour time limit and evaluate the solution quality of BIPNN and SCIP across varying scales of BIP instances. Fig. 4c and Fig. 4d show the comparative ratio of solutions obtained by BIPNN and SCIP. Specifically, the comparative ratio is defined asOs BIPNN Os SCIP, where Os BIPNN andOs SCIP are the solutions obtained by BIPNN and SCIP. Experimental results demonstrate that BIPNN starts outperforming SCIP when the number of variables exceeds 2,500 when d= 4. As the problem size increases, BIPNN’s solutions increasingly outperform SCIP’s solutions. For d= 6, BIPNN outperforms SCIP when the number of vertices exceeds 1,000. Tabu Search. Tabu search is a heuristic method that typically provides approximate solutions. | https://arxiv.org/abs/2505.20997v1 |
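For reference, the three-step construction of the synthetic benchmarks described at the start of this section can be sketched as follows. This stand-in uses plain Python instead of the DHG library (whose API is not reproduced here), and the sizes, orders, and coefficient ranges are illustrative guesses rather than the exact experimental settings.

    import random

    def random_pubo_instance(num_vars, max_order=4, seed=0):
        rng = random.Random(seed)
        # Step 1: a random hypergraph with |E| = 2|V| hyperedges of order <= max_order.
        # Step 2: a random coefficient per hyperedge, giving the PUBO objective terms.
        terms = []
        for _ in range(2 * num_vars):
            order = rng.randint(1, max_order)
            edge = tuple(sorted(rng.sample(range(num_vars), order)))
            terms.append((rng.uniform(-1.0, 1.0), edge))
        # Step 3: a few random constraints over variable triples (e.g. "at most one of
        # x_i, x_j, x_k"), to be converted into penalty terms before training.
        constraints = [tuple(sorted(rng.sample(range(num_vars), 3)))
                       for _ in range(num_vars // 50 + 1)]
        return terms, constraints

    terms, constraints = random_pubo_instance(200)
    print(len(terms), "monomials,", len(constraints), "constraints")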
We also impose a 1-hour time limit and evaluate the difference in solution quality for Tabu when the degrees of polynomial terms are set to 4and6. The number of vertices (variables) |V|in the hypergraph generated by BIPNN ranges from 200to5,000. Experimental results are depicted in Fig. 4e ( d= 4) and Fig. 4f ( d= 6). As shown in the figures, BIPNN achieves the performance comparable to Tabu when the number of variables exceeds 1,000. When the number of variables exceeds 2,500, BIPNN significantly outperforms Tabu as the variable count increases further. 6.2 Comparison on Real-world Datasets We compare our method against baseline methods on real-world graph and hypergraph datasets, including BAT, EAT, UAT, DBLP, CiteSeer, Primary, High, and Cora (refer to Appendix D). Graph datasets include BAT, EAT, UAT, DBLP, and CiteSeer. Hypergraph datasets include Primary, High, and Cora. Graph and hypergraph max-cut problems are selected as the BIP problem benchmarks. We impose 1hour time limit and evaluate the number of cuts obtained by BIPNN, SCIP, and Tabu. As depicted in Tab. 1, SCIP achieved the best performance on three graph datasets, while BIPNN achieved the best performance on two graph datasets and all three hypergraph datasets. In summary, compared to the graph max-cut problem, due to higher degree of polynomial terms in the objective function of the hypergraph max-cut problem, BIPNN tends to achieve better performance on hypergraph datasets. 6.3 Comparative Analysis on Hypergraph Generation Mechanism In Sec. 5 and Fig. 3, we propose to evaluate the effectiveness of BIPNN’s hypergraph generation mechanism by comparing the effects of its generated hypergraph structures against the original hypergraph structures in a hy- pergraph CO problem. In this section, we select hypergraph max-cut as benchmark and conduct experiments to evaluate the performance of BIPNN under both of the hypergraph structures. Experimental results are depicted in 8 APREPRINT - M AY28, 2025 Fig. 5. The number of variables ranges from 100to2000 . The degrees of polynomial terms dare set to d= 4 andd= 6 respectively. We perform 10 tests each time and record the average value of the cut numbers. As illustrated in Fig. 5a and Fig. 5b, the hypergraph structure generated by BIPNN can identify more cuts in com- parison. However, as depicted in Fig. 5c and Fig. 5d, when the parameter dis larger, the number of hyperedges (polynomial terms in PUBO objectives) in the hypergraph structure generated by BIPNN increases sharply, leading to significantly higher computational costs. The results align with the theoretical analysis we presented in Sec. 5. Figure 6: Comparison of the training time for BIPNN with or without GPU accelerated algo- rithm for PUBO losses.6.4 Ablation Study GPU Acceleration. The superior time efficiency of BIPNN is primarily attributed to the GPU-accelerated algorithm employed in computing large-scale PUBO loss functions. Fig. 6 shows a comparison of the training times for BIPNN with or without the GPU-accelerated algorithm. We evaluate the training time of BIPNN on the hypergraph max-cut problem. The number of variables ranges from 200to1000 . The degree of polynomial terms is set to 4. We train BIPNN for | https://arxiv.org/abs/2505.20997v1 |
a fixed number of 1000 epochs. As Fig. 6 illustrates, when GPU acceleration is applied to compute the PUBO loss function, the training time does not exhibit significant growth with an increasing number of variables. In contrast, without GPU acceleration, the training time increases rapidly as the number of variables rises. Annealing Strategy. We validate the effectiveness of the annealing strategy of BIPNN on the hypergraph max-cut problem. The experiments are conducted on Cora with 1,330vertices. The metrics include the number of cuts and discreteness of variables. The penalty strength γ is set to −2.5initially and its value is gradually increased during training. The value of γreaches 0after500epochs and continued to increase thereafter. As illustrated in Fig. 7, the annealing strategy ensures BIPNN to get better solutions while guaranteeing all variables to converge to discrete values. It demonstrates that negative γvalues enable BIPNN to escape local optima, thereby discovering better solutions. Moreover, when γis set to positive values, it facilitates the convergence of variables toward discrete values. 7 Conclusion Figure 7: Quality and discrete- ness of solutions with or without the annealing strategy.This work proposes BIPNN, a novel neural network solver for nonlinear BIP prob- lems. It reformulates nonlinear BIPs into PUBO cost functions, which correspond to hypergraph structures. On this basis, these PUBO cost functions are used as loss functions for HyperGNNs, enabling the model to solve BIPs in an unsupervised training manner. Compared with existing BIP solvers (e.g., SCIP) that rely on linearization, BIPNN reduces the training cost by optimizing nonlinear BIPs via straightforward gradient descent. Empirical results demonstrate that BIPNN achieves state-of-the-art performance in learning approximate solutions for large-scale BIP problems. 9 APREPRINT - M AY28, 2025 References [1]Yan Qiao, Yanjun Lu, Jie Li, Siwei Zhang, Naiqi Wu, and Bin Liu. An efficient binary integer programming model for residency time-constrained cluster tools with chamber cleaning requirements. IEEE Transactions on Automation Science and Engineering , 19(3):1757–1771, 2021. [2]Theodore P Papalexopoulos, Christian Tjandraatmadja, Ross Anderson, Juan Pablo Vielma, and David Belanger. Constrained discrete black-box optimization using mixed-integer programming. In International Conference on Machine Learning , pages 17295–17322. PMLR, 2022. [3]Libin Wang, Han Hu, Qisen Shang, Haowei Zeng, and Qing Zhu. Struc- turedmesh: 3-d structured optimization of façade components on photogram- metric mesh models using binary integer programming. IEEE Transactions on Geoscience and Remote Sensing , 62:1–12, 2024. [4]Giacomo Nannicini, Lev S Bishop, Oktay Günlük, and Petar Jurcevic. Optimal qubit assignment and routing via integer programming. ACM Transactions on Quantum Computing , 4(1):1–31, 2022. [5]Akshay Ajagekar, Kumail Al Hamoud, and Fengqi You. Hybrid classical- quantum optimization techniques for solving mixed-integer programming prob- lems in production scheduling. IEEE Transactions on Quantum Engineering , 3:1–16, 2022. [6]Lei Fan and Zhu Han. Hybrid quantum-classical computing for future network optimization. IEEE Network , 36(5):72–76, 2022. [7]Mercè Llabrés, Gabriel Riera, Francesc Rosselló, and Gabriel Valiente. Align- ment of biological networks by integer linear programming: virus-host protein- protein interaction networks. BMC bioinformatics , 21(Suppl 6):434, 2020. 
[8]Jianshen Zhu, Naveed Ahmed Azam, Fan Zhang, Aleksandar Shurbevski, Kazuya Haraguchi, Liang Zhao, Hiroshi Nagamochi, and Tatsuya Akutsu. A novel method for | https://arxiv.org/abs/2505.20997v1 |
inferring chemical compounds with prescribed topological substructures based on integer programming. IEEE/ACM Transactions on Computational Biology and Bioinformatics , 19(6):3233–3245, 2021. [9]Vladimir V Gusev, Duncan Adamson, Argyrios Deligkas, Dmytro Anty- pov, Christopher M Collins, Piotr Krysta, Igor Potapov, George R Darling, Matthew S Dyer, Paul Spirakis, et al. Optimality guarantees for crystal structure prediction. Nature , 619(7968):68–72, 2023. [10] Georgia Stinchfield, Joshua C Morgan, Sakshi Naik, Lorenz T Biegler, John C Eslick, Clas Jacobson, David C Miller, John D Siirola, Miguel Zamarripa, Chen Zhang, et al. A mixed integer linear programming approach for the design of chemical process families. Computers & Chemical Engineering , 183:108620, 2024. [11] Richard M Karp. Reducibility among combinatorial problems . Springer, 2010. [12] Elias B Khalil, Christopher Morris, and Andrea Lodi. Mip-gnn: A data-driven framework for guiding combinatorial solvers. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 36, pages 10219–10227, 2022. [13] Huigen Ye, Hua Xu, Hongyan Wang, Chengming Wang, and Yu Jiang. Gnn&gbdt-guided fast optimizing framework for large-scale integer program- ming. In International conference on machine learning , pages 39864–39878. PMLR, 2023. [14] Stephen Maher, Matthias Miltenberger, João Pedro Pedroso, Daniel Rehfeldt, Robert Schwarz, and Felipe Serrano. PySCIPOpt: Mathematical programming in python with the SCIP optimization suite. In Mathematical Software – ICMS 2016 , pages 301–307. Springer International Publishing, 2016. [15] Tobias Achterberg. Scip: solving constraint integer programs. Mathematical Programming Computation , 1:1–41, 2009. 10 APREPRINT - M AY28, 2025 [16] Narendra Karmarkar. A new polynomial-time algorithm for linear program- ming. In Proceedings of the sixteenth annual ACM symposium on Theory of computing , pages 302–311, 1984. [17] Yue Gao, Yifan Feng, Shuyi Ji, and Rongrong Ji. Hgnn+: General hyper- graph neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence , 45(3):3181–3199, 2022. [18] Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha Talukdar. Hypergcn: A new method for training graph convo- lutional networks on hypergraphs. Advances in neural information processing systems , 32, 2019. [19] Jing Huang and Jie Yang. Unignn: a unified framework for graph and hy- pergraph neural networks. In the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI) , 2021. [20] Yuma Ichikawa. Controlling continuous relaxation for combinatorial opti- mization. Advances in Neural Information Processing Systems (NeurIPS) , 37:47189–47216, 2024. [21] Jorge Nocedal and Stephen J Wright. Numerical optimization . Springer, 1999. [22] Fred Glover, Gary Kochenberger, Rick Hennig, and Yu Du. Quantum bridge analytics i: a tutorial on formulating and using qubo models. Annals of Operations Research , 314(1):141–183, 2022. [23] Fred Glover and Manuel Laguna. Tabu search . Springer, 1998. 11 APREPRINT - M AY28, 2025 A A toy example of the polynomial reformulation of BIP (Sec. 4). Forsin(x1+x2+x3), where x1, x2, x3∈ {0,1}, we can construct a polynomial to precisely fit the function, such that it matches sin(x1+x2+x3)for all combinations of x1, x2, x3∈ {0,1}. 
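Before the step-by-step derivation that follows, note that the eight matching conditions form a linear system in the eight multilinear coefficients, so the fit can also be recovered numerically. A minimal sketch, assuming NumPy; the helper name is ours, not the paper's.

import itertools
import numpy as np

def multilinear_fit(f):
    # Monomial order: x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3, 1
    rows, rhs = [], []
    for x1, x2, x3 in itertools.product([0, 1], repeat=3):
        rows.append([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3, 1])
        rhs.append(f(x1, x2, x3))
    # Solve the 8x8 interpolation system for [a1, a2, a3, b12, b13, b23, c, d].
    return np.linalg.solve(np.array(rows, dtype=float), np.array(rhs))

coeffs = multilinear_fit(lambda x1, x2, x3: np.sin(x1 + x2 + x3))
print(np.round(coeffs, 4))
# a1 = a2 = a3 ≈ 0.8415, b12 = b13 = b23 ≈ -0.774, c ≈ -0.062, d = 0,
# matching Eq. (12) below up to rounding.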
For multiple binary variables, the polynomial can be generalized as: P(x1, x2, x3) = a1 x1 + a2 x2 + a3 x3 + b12 x1 x2 + b13 x1 x3 + b23 x2 x3 + c x1 x2 x3 + d (11). Based on all possible combinations of x1, x2, x3, we can set up the following equations: 1) When x1= 0, x2= 0, | https://arxiv.org/abs/2505.20997v1
x3= 0:P(0,0,0) = d= sin(0) = 0 . Thus, d= 0. 2) When x1= 0, x2= 0, x3= 1:P(0,0,1) = a3= sin(1) ≈0.8415 . Thus, a3= 0.8415 . 3) When x1= 0, x2= 1, x3= 0:P(0,1,0) = a2= sin(1) ≈0.8415 . Thus, a2= 0.8415 . 4) When x1= 1, x2= 0, x3= 0:P(1,0,0) = a1= sin(1) ≈0.8415 . Thus, a1= 0.8415 . 5) When x1= 0, x2= 1, x3= 1:P(0,1,1) = a2+a3+b23= sin(2) ≈0.9093 . Substituting a2= 0.8415 anda3= 0.8415 :b23=−0.7737 . 6) When x1= 1, x2= 0, x3= 1:P(1,0,1) = a1+a3+b13= sin(2) ≈0.9093 Substituting a1= 0.8415 anda3= 0.8415 :b13=−0.7737 . 7) When x1= 1, x2= 1, x3= 0:P(1,1,0) = a1+a2+b12= sin(2) ≈0.9093 Substituting a1= 0.8415 anda2= 0.8415 :b12=−0.7737 8) When x1= 1, x2= 1, x3= 1:P(1,1,1) = a1+a2+a3+b12+b13+b23+c= sin(3) ≈0.1411 . Substituting known values: c=−0.0623 . Based on the above calculations, the polynomial is: P(x1, x2, x3) = 0 .8415( x1+x2+x3)−0.7737( x1x2+x1x3+x2x3)−0.0623x1x2x3 (12) B A toy example of the unconstrained reformulation of BIP (Sec. 4). For a nonlinear constraint with exponential term g(x):2x1+ex2+ 3x1x3≤5, where x1, x2, x3∈ {0,1}, we can find the minimal violation subsets Vbased on all possible combinations of x1, x2, x3. 1) When x1= 0, x2= 0, x3= 0:g(x) = 1≤5, feasible. 2) When x1= 0, x2= 0, x3= 1:g(x) = 1≤5, feasible. 3) When x1= 0, x2= 1, x3= 0:g(x) =e≤5, feasible. 4) When x1= 1, x2= 0, x3= 0:g(x) = 3≤5, feasible. 5) When x1= 0, x2= 1, x3= 1:g(x) =e≤5, feasible. 6) When x1= 1, x2= 0, x3= 1:g(x) = 6≥5, violation. 7) When x1= 1, x2= 1, x3= 0:g(x) =e+ 2≤5, feasible. 8) When x1= 1, x2= 1, x3= 1:g(x) = 5 + e≥5, violation (not minimal). Identified minimal violation subsets: {x1, x3}. Thus, P(x) =λ(x1x3) (13) Final BIP objective: OBIP=f(x) +λ(x1x3) (14) C The hypergraph max-cut problem. The max-cut problem of a hypergraph G= (V, E)involves partitioning the vertex set into two disjoint subsets such that the number of hyperedges crossing the partitioned blocks is maximized. 12 APREPRINT - M AY28, 2025 PUBO Form. The hypergraph max-cut problem on Gcan be formulated by optimizing a PUBO objective as follows: min Omax−cut=X e∈E(1−Y i∈exi−Y i∈e(1−xi)) (15) where xi∈ {0,1}are binary decision variables. For a simple example illustrated in Fig. 3, the original hypergraph consists of three hyperedges: {x1, x2},{x3, x4}, and {x1, x2, x3}. Thus, the max-cut objective of Gis to minimize 2x1+ 2x2+ 2x3+x4−3x1x2−x1x3−x2x3−2x3x4. BIPNN typically generates a new hypergraph structure with five hyperedges, {x1, x2},{x3, x4},{x1, x3}, and{x2, x3}, to solve this PUBO objective. we found that both hypergraphs can be utilized for HyperGNN training in BIPNN framework. D Datasets. Table 2: Summary statistics of five real-world graphs: the number of vertices |V|, the number of edges |E|. Three hypergraphs: the number of vertices |V|, the number of hyperedges |E|, the size of the hypergraphP e∈E|e|. Graphs |V| | E| Hypergraphs |V| | E|P e∈E|e| BAT 131 1,003 Primary 242 12,704 30,729 EAT 399 5,993 High 327 7,818 18,192 UAT 1,190 13,599 Cora 1,330 1,503 4,599 DBLP 2,591 3,528 CiteSeer 3,279 | https://arxiv.org/abs/2505.20997v1 |
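Appendix C's PUBO objective (Eq. (15)) is precisely what BIPNN minimizes as a training loss. Below is a minimal sketch, assuming PyTorch, of a batched, GPU-friendly relaxation of that loss together with an annealed penalty of the form γ Σ p_i(1 − p_i) in the spirit of Sec. 6.4. The padding scheme, the penalty form, and the schedule are illustrative assumptions rather than the authors' implementation, and p would normally be the HyperGNN output instead of a free parameter.

import torch

def maxcut_pubo_loss(p, edge_index, edge_mask, gamma):
    # p:          (|V|,) relaxed assignments in (0, 1)
    # edge_index: (|E|, d_max) vertex indices per hyperedge, padded
    # edge_mask:  (|E|, d_max) 1.0 for real members, 0.0 for padding
    # gamma:      annealed penalty strength (negative early, positive later)
    pe = p[edge_index]                                            # (|E|, d_max)
    ones = torch.ones_like(pe)
    prod_in = torch.prod(torch.where(edge_mask.bool(), pe, ones), dim=1)
    prod_out = torch.prod(torch.where(edge_mask.bool(), 1.0 - pe, ones), dim=1)
    uncut = prod_in + prod_out        # hyperedge entirely on one side of the cut
    # Minimizing sum(uncut) maximizes the relaxed cut count of Eq. (15), up to
    # the constant |E|.  The penalty rewards fractional p when gamma < 0 and
    # pushes p toward {0, 1} when gamma > 0 (assumed form of the annealing).
    return uncut.sum() + gamma * torch.sum(p * (1.0 - p))

# Toy hypergraph from Appendix C (0-indexed): {x1,x2}, {x3,x4}, {x1,x2,x3}.
edge_index = torch.tensor([[0, 1, 0], [2, 3, 0], [0, 1, 2]])
edge_mask = torch.tensor([[1, 1, 0], [1, 1, 0], [1, 1, 1]], dtype=torch.float)

torch.manual_seed(0)
theta = (0.1 * torch.randn(4)).requires_grad_()
opt = torch.optim.Adam([theta], lr=0.1)
for step in range(300):
    gamma = -2.5 + 5.0 * step / 300                # anneal from negative to positive
    loss = maxcut_pubo_loss(torch.sigmoid(theta), edge_index, edge_mask, gamma)
    opt.zero_grad(); loss.backward(); opt.step()
print(torch.round(torch.sigmoid(theta)).tolist())

Because every hyperedge product is evaluated in one batched tensor operation, the loss scales to the instance sizes in Fig. 6 without the per-term Python loop that a naive implementation would require.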
arXiv:2505.21012v1 [cs.LG] 27 May 2025FEDERATED INSTRUMENTAL VARIABLE ANALYSIS VIA FEDERATED GENERALIZED METHOD OF MOMENTS Geetika, Somya Tyagi, Bapi Chatterjee∗ Department of Computer Science and Engineering, IIIT Delhi New Delhi, India {geetikai, somya23005, bapi}@iiitd.ac.in ABSTRACT Instrumental variables (IV) analysis is an important applied tool for areas such as healthcare and consumer economics. For IV analysis in high-dimensional settings, the Generalized Method of Moments (GMM) using deep neural networks offers an efficient approach. With non-i.i.d. data sourced from scattered decentralized clients, federated learning is a popular paradigm for training the models while promising data privacy. However, to our knowledge, no federated algorithm for either GMM or IV analysis exists to date. In this work, we introduce federated instrumental variables analysis ( FEDIV) via federated generalized method of moments ( FEDGMM ). We formulate FEDGMM as a federated zero-sum game defined by a federated non-convex non-concave minimax optimization problem, which is solved using federated gradient descent ascent ( FEDGDA ) algorithm. One key challenge arises in theoretically characterizing the federated local optimality. To address this, we present properties and existence results of clients’ local equilibria via FEDGDA limit points. Thereby, we show that the federated solution consistently estimates the local moment conditions of every participating client. The proposed algorithm is backed by extensive experiments to demonstrate the efficacy of our approach. Keywords Federated Learning ·Generalized Method of Moments ·Instrumental Variables Analysis ·Causal Inference 1 Introduction Federated Learning (FL) (McMahan et al., 2017) over scattered clients without data sharing is now an established paradigm for training Machine Learning (ML) models. The data privacy makes it attractive for applications to healthcare (Nguyen et al., 2022; Antunes et al., 2022; Oh and Nadkarni, 2023), finance and banking (Byrd and Polychroniadou, 2020; Long et al., 2020), smart cities and mobility (Zheng et al., 2022; Gecer and Garbinato, 2024), drug discovery (Oldenhof et al., 2023) and many others (Ye et al., 2023). However, the existing research in FL primarily focuses on supervised learning (Kairouz et al., 2021), which struggles to predict the outcomes due to confounding variables not observed in training data. For example, consider the Nature Medicine report by Dayan et al. (2021) on a global-scale FL to predict the effectiveness of oxygen administration (a treatment variable) to COVID-19 patients in the emergency rooms while maintaining their privacy. It is known that COVID-19 revival rates are highly influenced by lifestyle-related factors such as obesity and diabetes (Wang, Sato, and Sakuraba, 2021), other co-morbidities (Russell, Lone, and Baillie, 2023), and the patients’ conditions at the emergency care admission time (Izcovich et al., 2020). Arguably, the Dayan et al. (2021)’s approach may over- or under-estimate the effects of oxygen treatment. ∗This work is supported in part by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR/CEFIPRA) through the FedAutoMoDL project, the Infosys Center for Artificial Intelligence (CAI) at IIIT-Delhi through the Scalable Federated Learning project. Geetika is partially supported by the INSPIRE fellowship No: DST/INSPIRE Fellowship/[IF220579] offered by the Department of Science & Technology (DST), Government of India. 
Bapi Chatterjee also acknowledges support from the Anusandhan National Research Foundation under project | https://arxiv.org/abs/2505.21012v1
SRG/2022/002269. Federated IV Analysis via Federated GMM, Geetika et al. One can address the above issue by observing and accommodating every confounding latent factor that may influence the outcome. Thus, it may require that obesity, diabetes, overall health at the time of admission, and even genetic factors are accommodated; for example, using a technique such as matching (Kallus, 2020b; Kallus, 2020a). It may potentially render the treatment variable undergo a randomized controlled trial such as A/B testing (Kohavi et al., 2013) on decentralized, scattered, and possibly private data. However, to our knowledge, these techniques are yet unexplored in the realms of FL. Alternatively, one could assume conditional independence between unobserved confounders and the treatment variable, for example, the works by Shalit, Johansson, and Sontag (2017) and Imai and Li (2023), etc. However, this may not be a fair approach for an application such as the federated estimation of effectiveness of oxygen therapy (Dayan et al., 2021). To elaborate, Liang et al. (2023) suggests the hypoxia-inducible factors (HIF) – a protein that controls the rate of transcription of genetic information from DNA to messenger RNA by binding to a specific DNA sequence (Latchman, 1993) – plays a vital role in oxygen consumption at the cellular level. The machine learning model developed by FL implementation of Dayan et al. (2021) would miss the crucial counterfactual scenarios, such as HIF levels among patients undergoing oxygen therapy impacting morbidity outcomes, should it assume conditional independence between effects of oxygen treatment and every confounder. Such variables can be often traced in applications such as industry-scale federated drug discovery by AstraZeneca (Oldenhof et al., 2023). Instrumental variables (IV) provide a workaround to both the above issues under the assumption that the latent confounding factor influences only the treatment variable but does not directly affect the outcome. In the above example, the measure of HIF works as an instrumental variable that affects oxygen treatment as in its effective organ-level consumption but does not directly affect the mortality of the COVID-19 patient (Dayan et al., 2021). IV can play an important role in a federated setting as the influence assumption between the confounders and the treatment variables will remain local to the clients. IV analysis has been comprehensively explored in econometrics (Angrist and Krueger, 2001; Angrist and Pischke, 2009) with several decades of history such as works of Wright (1928) and Reiersøl (1945). Its efficiency is now accepted for learning even high-dimensional complex causal relationships such as one in image datasets (Hartford et al., 2017; Bennett, Kallus, and Schnabel, 2019). Naturally, the growing demand of FL entails designing methods for federated IV analysis, which, to our knowledge, is yet unexplored. In the centralized deep learning setting, Hartford et al. (2017) introduced an IV analysis framework, namely D EEPIV, which uses two stages of neural networks training – first for the treatment prediction and the second with a loss function involving integration over the conditional treatment distribution. The two-stage process has precursors in applying least square regressions in the two phases (Angrist and Pischke, 2009)[4.1.1]. In the same setting, another approach | https://arxiv.org/abs/2505.21012v1 |
for IV analysis applies the generalized method of moments (GMM) (Wooldridge, 2001). GMM is a celebrated estimation approach in social sciences and economics. It was introduced by Hansen (1982), for which he won a Nobel Prize in Economics (Steif et al., 2014). Building on (Wooldridge, 2001), Bennett, Kallus, and Schnabel (2019) introduced deep learning models to GMM estimation; they named their method DEEPGMM . Empirically, DEEPGMM outperformed DEEPIV.DEEPGMM is solved as a smooth zero-sum game formulated as a minimax optimization problem. Prior to DEEPGMM , Lewis and Syrgkanis (2018) also employed neural networks for GMM estimation. Their method, called the adversarial generalized method of moments ( AGMM ), also formulated the problem as a minimax optimization to fit a GMM criterion function over a finite set of unconditional moments. DEEPGMM differs from AGMM in using a weighted norm to define the objective function. The experiments in (Bennett, Kallus, and Schnabel, 2019) showed that DEEPGMM outperformed AGMM for IV analysis, and both won against DEEPIV. Nonetheless, to our knowledge, none of these methods have a federated counterpart. Minimax optimization has been studied in federated settings (Sharma et al., 2022; Wu et al., 2024), which potentially provides an underpinning for federated GMM. However, beyond the algorithm and its convergence results, there are a few key challenges: (A)For non-i.i.d. client-local data, describing common federated GMM estimators is not immediate. It requires characterizing a synchronized model state that fit moment conditions of every client. (B)To show that the dynamics of federated minimax optimization retrieves an equilibrium solution of the federated zero-sum game as a limit point. And, (C)Under heterogeneity, to establish that the federated game equilibria also satisfies the equilibrium requirements of every client thereby consistently estimating the clients’ local moments. In this work, we address the above challenges. Our contributions are summarized as the following: 2 Federated IV Analysis via Federated GMM, Geetika et al. 1.We introduce FEDIV: federated IV analysis. To our knowledge, FEDIVis the first work on IV analysis in a federated setting. 2. We present FEDDEEPGMM2– a federated adaptation of D EEPGMM of Bennett, Kallus, and Schnabel (2019) to solve F EDIV. F EDDEEPGMM is implemented as a federated smooth zero-sum game. 3.We show that the limit points of a federated gradient descent ascent ( FEDGDA ) algorithm include the equilibria of the zero-sum game. 4.We show that an equilibrium solution of the federated game obtained at the server consistently estimates the moment conditions of every client. 5.We experimentally validate our algorithm. The experiments show that even for heterogenous data, FEDDEEPGMM has convergent dynamics analogous to the centralized D EEPGMM algorithm. 1.1 Related work The federated supervised learning has received algorithmic advancements guided by factors such as tackling the system and statistical heterogeneities, better sample and communication complexities, model personalization, differential privacy, etc. An inexhaustible list includes FEDPROX (Li et al., 2020), SCAFFOLD (Karimireddy et al., 2020), FEDOPT (Reddi et al., 2020), LPP-SGD (Chatterjee, Kungurtsev, and Alistarh, 2024), PFEDME(T Dinh, Tran, and Nguyen, 2020), DP-SCAFFOLD (Noble, Bellet, and Dieuleveut, 2022), and others. By contrast, federated learning with confounders, which typically forms a | https://arxiv.org/abs/2505.21012v1 |
causal learning setting, is a relatively under-explored research area. V o et al. (2022a) presented a method to learn the similarities among the data sources translating a structural causal model (Pearl, 2009) to federated setting. They transform the loss function by utilizing Random Fourier Features into components associated with the clients. Thereby they compute individual treatment effects (ITE) and average treatment effects (ATE) by a federated maximization of evidence lower bound (ELBO). V o et al. (2022b) presented another federated Bayesian method to estimate the posterior distributions of the ITE and ATE using a non-parametric approach. Xiong et al. (2023) presented maximum likelihood estimator (MLE) computation in a federated setting for ATE estimation. They showed that the federated MLE consistently estimates the ATE parameters considering the combined data across clients. However, it is not clear if this approach is applicable to consistent local moment conditions estimation for the participating clients. Almodóvar, Parras, and Zazo (2024) applied FedAvg to variational autoencoder (Kingma, Welling, et al., 2019) based treatment effect estimation TEDV AE (Zhang, Liu, and Li, 2021). However, their work mainly focused on comparing the performance of vanilla FedAvg with a propensity score-weighted FedAvg in the context of federated implementation of TEDV AE. Our work differs from the above related works in the following: (a)we introduce IV analysis in federated setting, and, we introduce federated GMM estimators, which has applications for various empirical research (Wooldridge, 2001), (b)specifically, we adopt a non-Bayesian approach based on a federated zero-sum game, wherein we focus on analysing the dynamics of the federated minimax optimization and characterize the global equilibria as a consistent estimator of the clients’ moment conditions. Our work also differs from federated minimax optimization algorithms: Sharma et al. (2022), Shen et al. (2024), Wu et al. (2024), and Zhu et al. (2024), where the motivation is to analyse and improve the non-asymptotic convergence under various analytical assumptions on the objective functions. We primarily focus on deriving the equilibrium via the limit points of the federated GDA algorithm. 2 Preliminaries We model our basic terminologies after (Bennett, Kallus, and Schnabel, 2019) for a client-local setting. Consider a distributed system as a set of Nclients [N]with datasets Si={(xi j, yi j)}ni j=1,∀i∈[N]. We assume that for a client i∈ [N], the treatment and outcome variables xi jandyi j, respectively, are related by the process Yi=gi 0(Xi)+ϵi, i∈[N]. We assume that each client-local residual ϵihas zero mean and finite variance, i.e. E[ϵi] = 0,E[(ϵi)2]<∞.Furthermore, we assume that the treatment variables Xiare endogenous on the clients, i.e. E[ϵi|Xi]̸= 0,and therefore, gi 0(Xi)̸= E[Yi|Xi]. We assume that the treatment variables are influenced by instrumental variables Zi,∀i∈[N]so that P(Xi|Zi)̸=P(Xi). (1) 2Wu et al. (2023) used F EDGMM as an acronym for federated Gaussian mixture models. 3 Federated IV Analysis via Federated GMM, Geetika et al. Furthermore, the instrumental variables do not directly influence the outcome variables Yi,∀i∈[N]: E[ϵi|Zi] = 0. (2) Note that, assumptions 1, 2 are local to the clients, thus, honour the data-privacy requirements of a federated learning task. In this setting, we aim to discover a common or | https://arxiv.org/abs/2505.21012v1 |
global causal response function that would fit the data generation processes of each client without centralizing the data. More specifically, we learn a parametric function g0(.)∈G:= {g(., θ)|θ∈Θ}expressed as g0:=g(., θ0)forθ0∈Θ, defined by g(., θ0) =1 NNX i=1gi(., θ0). (3) The learning process essentially involves estimating the true parameter θ0byˆθ. To measure the performance of the learning procedure, we use the MSE of the estimate ˆg:=g(.,ˆθ)against the true g0averaged over the clients. 3 Federated Deep Generalized Method of Moments We adapt DEEPGMM (Bennett, Kallus, and Schnabel, 2019) in the local setting of a client i∈[N]. For a self-contained reading, we include the description here. 3.1 Client-local Deep Generalized Method of Moments (D EEPGMM) GMM estimates the parameters of the causal response function using a certain number of moment conditions . Define the moment function on a client i∈[N]as a vector-valued function fi:R|Z|→Rmwith components fi 1, fi 2, . . . , fi m. We consider the moment conditions as parametrized functions {fi j}m j=1∀i∈[N]with the assumption that their expectation is zero at the true parameter values. More specifically, using equation (2), we have E[fi j(Zi)ϵi] = 0,∀j∈[m],∀i∈[N], (4) We assume that mmoment conditions {fi j}m j=1at each client i∈[N]are sufficient to identify a unique federated estimate ˆθtoθ0. With (4), we define the moment conditions on a client i∈[N]as ψ(fi j;θ) = 0 ,∀j∈[m],where (5) ψ(fi;θ) =E[fi(Zi)ϵi] =E[fi(Zi)(Yi−gi(Xi;θ)). In empirical terms, the sample moments for the i-th client with nisamples are given by ψni(fi;θ) =Eni[fi(Z)ϵi] =1 niniX k=1fi(Zi k)(Yi k−gi(Xi k;θ)), (6) where ψni(fi;θ) = ψni(fi 1;θ), ψni(fi 2;θ), . . . , ψ ni(fi m;θ) is the moment condition vector, and ψni(fi j;θ) =1 niniX k=1fi j(Zi k)(Yi k−gi(Xi k;θ)). (7) Thus, for empirical estimation of the causal response function gi 0at client i∈[N], it needs to satisfy ψni(fi j;θ0) = 0 ,∀i∈[N]andj∈[m] (8) atθ=θ0. Equation (8) is reformulated as an optimization problem given by min θ∈Θ∥ψni(fi 1;θ), ψni(fi 2;θ), . . . , ψ ni(fi m;θ)∥2, (9) where we use the Euclidean norm ∥w∥2=wTw. Drawing inspiration from Hansen (1982), DEEPGMM used a weighted norm, which yields minimal asymptotic variance for a consistent estimator ˜θ, to cater to the cases of (finitely) large number of moment conditions. We adapt their weighted norm ∥w∥2 ˜θ=wTC−1 ˜θw, to a client-local setting via the covariance matrix C˜θdefined by C˜θ jl=1 niniX k=1fi j(Zi k)fi l(Zi k)(Yi k−gi(Xi k;˜θ))2. (10) 4 Federated IV Analysis via Federated GMM, Geetika et al. Now considering the vector space Vof real-valued functions, ψni(fi;θ) = ψni(fi 1;θ), ψni(fi 2;θ), . . . , ψ ni(fi m;θ) is a linear operator on Vand C˜θ(fi, hi) =1 niniX k=1fi(Zi k)hi(Zi k)(Yi k−gi(Xi k;˜θ))2(11) is a bilinear form. With that, for any subset Fi⊂ V, we define a function Ψni(θ,Fi,˜θ) = sup fi∈Fiψni(fi;θ)−1 4C˜θ(fi, fi), which leads to the following optimization problem. Lemma 1 (Lemma 1 of (Bennett, Kallus, and Schnabel, 2019)) .With the weighted norm defined by equation (10), and forFi=span({fi j}m j=1) ∥ψni(fi 1;θ), ψni(fi 2;θ), . . . , ψ ni(fi m;θ)∥2 ˜θ= Ψ ni(θ,Fi,˜θ). (12) Thus, a weighted reformulation | https://arxiv.org/abs/2505.21012v1 |
of (9) is given by θGMM∈arg min θ∈ΘΨni(θ,Fi,˜θ). (13) As the data-dimension grows, the function class Fiis replaced with a class of neural networks of a certain architecture, i.e.Fi={fi(z, τ) :τ∈ T } . Similarly, let Gi={gi(x, θ) :θ∈Θ}be another class of neural networks with varying weights. With that, define Ui ˜θ(θ, τ) :=1 niniX k=1fi(Zi k, τ) Yi k−gi(Xi k;θ) −1 4niniX k=1 fi(Zi k, τ)2 Yi k−gi(Xi k;θ)2(14) Then (13) is reformulated as the following θDGMM∈arg min θ∈Θsup τ∈TUi ˜θ(θ, τ).(15) Equation (15) forms a zero-sum game, whose equilibrium solution is shown to be a true estimator to θ0under a set of standard assumptions; see Theorem 2 in (Bennett, Kallus, and Schnabel, 2019). 3.2 Federated Deep GMM (F EDDEEPGMM) The federated generalized method moment ( FEDDEEPGMM ) needs to find the global moment estimators for the causal response function to fit data on each client. Thus, the federated counterpart of equation (5) is given by ψ(f;θ) =Ei[E[fi(Zi)(Yi k−gi(Xi;θ)]] = 0 , (16) where the expectation Eiis over the clients. In this work, we consider full client participation . Thus, for the empirical federated moment estimation, we formulate: ψn(f;θ) =1 NNX i=1ψni(fi;θ) =1 NNX i=11 niniX k=1fi(Zi k)(Yi k−gi(Xi k;θ)) (17) With that, the federated moment estimation problem following (13) is formulated as: θFedDeepGMM∈arg min θ∈Θ∥ψn(f;θ)∥2 ˜θ, (18) where ∥w∥˜θ=w⊤C−1 ˜θxis the previously defined weighted-norm with inverse covariance as weights. In general cases, we do not have explicit knowledge of the moment conditions of various clients. We propose FEDDEEPGMM , a “deep" reformulation of the federated optimization problem based on the neural networks of a given architecture shared among clients and is shown to have the same solution as the federated GMM problem formulated earlier. Lemma 2. LetF=span{fi j|i∈[N], j∈[m]}. An equivalent objective function for the federated moment estimation optimization problem (18) is given by: ∥ψN(f;θ)∥2 ˜θ= sup fi∈F ∀i∈[N]1 NNX i=1 ψni(fi;θ)−1 4C˜θ(fi;fi) ,where (19) ψni(fi;θ) :=1 niniX k=1fi(Zi k)(Yi k−gi(Xi k;θ)),andC˜θ(fi, fi) :=1 niniX k=1(fi(Zi k))2(Yi k−gi(Xi k;˜θ))2. 5 Federated IV Analysis via Federated GMM, Geetika et al. The detailed proof is similar to Lemma 1 and is given in Appendix C.1. The federated zero-sum game is then defined by: ˆθFedDeepGMM∈arg min θ∈Θsup τ∈TU˜θ(θ, τ) :=1 NNX i=1Ui ˜θ(θ, τ), (20) where Ui ˜θ(θ, τ)is defined in equation (14). The federated GMM formulation by a zero-sum game defined by a federated minimax optimization problem (20) provides the global estimator as its equilibrium solution. We solve (20) using the federated gradient descent ascent (F EDGDA) algorithm described next. 3.3 Federated Gradient Descent Ascent (F EDGDA) Algorithm An adaptation of the standard gradient descent ascent algorithm to federated setting is well-explored: (Deng and Mahdavi, 2021; Sharma et al., 2022; Shen et al., 2024; Wu et al., 2024). The clients run the gradient descent ascent algorithm for several local updates and then the orchestrating server synchronizes them by collecting the model states, averaging them, and broadcasting it to the clients. A detailed description is included as a pseudocode in Appendix B. Similar to (Bennett, Kallus, and Schnabel, 2019), we note that the | https://arxiv.org/abs/2505.21012v1 |
federated minimax optimization problem (20) is not convex-concave on (θ, τ). The convergence results of variants of FEDGDA (Sharma et al., 2022; Shen et al., 2024; Wu et al., 2024) assume that U˜θ(θ, τ)is non-convex on θand satisfies a µ−Polyak Łojasiewicz (PL) inequality on τ, see assumption 4 in (Sharma et al., 2022). PL condition is known to be satisfied by over-parametrized neural networks (Charles and Papailiopoulos, 2018; Liu, Zhu, and Belkin, 2022). The convergence results of our method will follow (Sharma et al., 2022). We include a formal statement in Appendix B. However, beyond convergence, we primarily aim to show that an optimal solution will consistently estimate the moment conditions of the clients, which we do next. 4 Federated Equilibrium Solutions In this section, we present our main results, which establish the existence and characterize the federated equilibrium solution. 4.1 Federated Sequential Game As minimax is not equal to maximin in general for a non-convex-non-concave problem, it is important to model the federated game as a sequential game (Jin, Netrapalli, and Jordan, 2020) whose outcome would depend on what move – maximization or minimization – is taken first. We use some results from Jin, Netrapalli, and Jordan (2020), which we include here for a self-contained reading. We start with the following assumptions: Assumption 1. Client-local objective Ui ˜θ(θ, τ)∀i∈[N]is twice continuously differentiable for both θandτ. Thus, the global objective U˜θ(θ, τ)is also a twice continuously differentiable function. Assumption 2 (Smoothness) .The gradient of each client’s local objective, ∇Ui ˜θ(θ, τ), is Lipschitz continuous with respect to both θandτ. For all i∈[N], there exist constants L >0such that: ∥∇θUi ˜θ(θ1, τ1)− ∇ θUi ˜θ(θ2, τ2)∥ ≤L∥(θ1, τ1)−(θ2, τ2)∥,and ∥∇τUi ˜θ(θ1, τ1)− ∇ τUi ˜θ(θ2, τ2)∥ ≤L∥(θ1, τ1)−(θ2, τ2)∥, ∀(θ1, τ1),(θ2, τ2). Thus, U˜θ(θ, τ)isL-Lipschitz smooth. Assumption 3 (Gradient Dissimilarity) .The heterogeneity of the local gradients with respect to (w.r.t.) θandτis bounded as follows: ∥∇θUi ˜θ(θ, τ)− ∇ θU˜θ(θ, τ)∥ ≤ζi θ ∥∇τUi ˜θ(θ, τ)− ∇ τU˜θ(θ, τ)∥ ≤ζi τ, where ζi θ, ζi τ≥0are the bounds that quantify the degree of gradient dissimilarity at client i∈[N]. Assumption 4 (Hessian Dissimilarity) .The heterogeneity in terms of hessian w.r.t. θandτis bounded as follows: ∥∇2 θθUi ˜θ(θ, τ)− ∇2 θθU˜θ(θ, τ)∥σ≤ρi θ, ∥∇2 ττUi ˜θ(θ, τ)− ∇2 ττU˜θ(θ, τ)∥σ≤ρi τ, ∥∇2 θτUi ˜θ(θ, τ)− ∇2 θτU˜θ(θ, τ)∥σ≤ρi θτ, ∥∇2 τθUi ˜θ(θ, τ)− ∇2 τθU˜θ(θ, τ)∥σ≤ρi τθ, where ρi θ, ρi τ, ρi θτ,andρi τθ≥0quantify the degree of hessian dissimilarity at client i∈[N]by spectral norm ∥.∥σ. 6 Federated IV Analysis via Federated GMM, Geetika et al. Assumptions 3 and 4 provide a measure of data heterogeneity across clients in a federated setting. We assume that ζ′s andρ′sare bounded. In the special case, when ζandρ’s are all 0, then the data is homogeneous across clients. We adopt the notion of Stackelberg equilibrium for pure strategies, as discussed in (Jin, Netrapalli, and Jordan, 2020), to characterize the solution of the minimax federated optimization problem for a non-convex non-concave function U˜θ(θ, τ)for the sequential game where min-player goes first and the max-player goes second. To avoid ambiguity between the | https://arxiv.org/abs/2505.21012v1 |
adjectives of the terms global/local objective functions in federated learning and the global/local nature of minimax points in optimization, we refer to a global objective as the federated objective and a local objective as the client’s objective. Definition 1 (Local minimax point) .[Definition 14 of (Jin, Netrapalli, and Jordan, 2020)] Let U(θ, τ)be a function defined over Θ× T and let hbe a function satisfying h(δ)→0asδ→0. There exists a δ0, such that for any δ∈(0, δ0],and any (θ, τ)such that ∥θ−ˆθ∥ ≤δand∥τ−ˆτ∥ ≤δ, then a point (ˆθ,ˆτ)is a local minimax point of U, if ∀(θ, τ)∈Θ× T, it satisfies: U˜θ(ˆθ, τ)≤U˜θ(ˆθ,ˆτ)≤ max τ′:∥τ′−ˆτ∥≤h(δ)U˜θ(θ, τ′), (21) With that, the first-order & second-order necessary conditions for local minimax points are as below. Lemma 3 (Propositions 18, 19, 20 of (Jin, Netrapalli, and Jordan, 2020)) .Under assumption 1, any local minimax point satisfies the following conditions: •First-order Necessary Condition: A local minimax point (θ, τ)satisfies: ∇θU˜θ(θ, τ) = 0 and∇τU˜θ(θ, τ) = 0 . •Second-order Necessary Condition: A local minimax point (θ, τ)satisfies: ∇2 ττU˜θ(θ, τ)⪯0.Moreover, if ∇2 ττU˜θ(θ, τ)≺0, thenh ∇2 θθU˜θ− ∇2 θτU˜θ ∇2 ττU˜θ−1∇2 τθU˜θi (θ, τ)⪰0. •Second-order Sufficient Condition: A stationary point (θ, τ)that satisfies ∇2 ττU˜θ(θ, τ)≺0, and h ∇2 θθU˜θ− ∇2 θτU˜θ ∇2 ττU˜θ−1∇2 τθU˜θi (θ, τ)≻0 guarantees that (θ, τ)is a strict local minimax. Now, in order to define the federated approximate equilibrium solutions, we first define an approximate local minimax point. Definition 2 (Approximate Local minimax point) .[An adaptation of definition 34 of (Jin, Netrapalli, and Jordan, 2020)] Let U(θ, τ)be a function defined over Θ×T and let hbe a function satisfying h(δ)→0asδ→0. There exists aδ0, such that for any δ∈(0, δ0],and any (θ, τ)such that ∥θ−ˆθ∥ ≤δand∥τ−ˆτ∥ ≤δ, then a point (ˆθ,ˆτ)is an ε-approximate local minimax point of U, if it satisfies: U˜θ(ˆθ, τ)−ε≤U˜θ(ˆθ,ˆτ)≤ max τ′:∥τ′−ˆτ∥≤h(δ)U˜θ(θ, τ′) +ε, (22) We aim to achieve approximate local minimax points for every client as a solution of the federated minimax optimization. With that, we characterize the federated solution as the following. Definition 3 (E-Approximate Federated Equilibrium Solutions) .LetE={εi}N i=1be the approximation error vector for clients [N]. LetUi ˜θ(θ, τ)be a function defined over Θ×T for a client i∈[N]. AnE-approximate federated equilibrium point (ˆθ,ˆτ)that is an εi-approximate local minimax point for every clients’ objective Ui ˜θ, where the federated objective isU˜θ(θ, τ) :=1 NPN i=1Ui ˜θ(θ, τ), must follow the conditions below: 1.εi- First-order Necessary Condition: The point (ˆθ,ˆτ)must be an εistationary point for every client i∈[N], i.e., ∥∇θUi ˜θ(ˆθ,ˆτ)∥ ≤εi,and ∥∇τUi ˜θ(ˆθ,ˆτ)∥ ≤εi. 2.Second-Order εiNecessary Condition: The point (ˆθ,ˆτ)must satisfy the second-order conditions: ∇2 ττUi ˜θ(ˆθ,ˆτ)⪯ −εiI, andh ∇2 θθUi ˜θ− ∇2 θτUi ˜θ ∇2 ττU˜θ−1∇2 τθUi ˜θi (ˆθ,ˆτ)⪰εiI. 7 Federated IV Analysis via Federated GMM, Geetika et al. 3.Second-Order εiSufficient Condition: Anεistationary point (θ, τ)that satisfies ∇2 ττUi ˜θ(ˆθ,ˆτ)≺ −εiI, and h ∇2 θθU˜θ− ∇2 θτU˜θ ∇2 ττU˜θ−1∇2 τθU˜θi (ˆθ,ˆτ)≻εiI guarantees that (ˆθ,ˆτ)is a strict local minimax point ∀i∈[N]that satisfies εiapproximate equilibrium as in definition 2. We now state the main theoretical result of our work in the following theorem. Theorem 1. Under assumptions 1, 2, 3 and | https://arxiv.org/abs/2505.21012v1 |
4, a minimax solution (ˆθ,ˆτ)of federated optimization problem (20) that satisfies the equilibrium condition as in definition 1: U˜θ(ˆθ, τ)≤U˜θ(ˆθ,ˆτ)≤ max τ′:∥τ′−ˆτ∥≤h(δ)U˜θ(θ, τ′), is anE-approximate federated equilibrium solution as defined in 3, where the approximation error εifor each client i∈[N]lies in: max{ζi θ, ζi τ} ≤εi≤min{α−ρi τ, β−Bi} for ρi τ < α and Bi> β , such that α := λmax ∇2 ττU˜θ(ˆθ,ˆτ) ,β := λminh ∇2 θθU˜θ− ∇2 θτU˜θ ∇2 ττU˜θ−1∇2 τθU˜θi (ˆθ,ˆτ) andBi:=ρi θ+Lρi θτ1 |λmax(∇2ττUi ˜θ)|+Lρi τθ1 |λmax(∇2ττUi ˜θ)|+ L2ρi τ1 |λmax(∇2ττUi ˜θ)·λmax(∇2ττU˜θ)|. The proof of theorem 1 is given in Appendix C.2. Note that when data is homogeneous (i.e., for each client i,ζi θ,ζi τ,ρi τ andBiare all zeroes), each client satisfies an exact local minimax equilibrium. Remark 1. In Theorem 1, note that if the interval [max{ζi θ, ζi τ},min{α−ρi τ, β−Bi}]is empty, i.e. max{ζi θ, ζi τ}> min{α−ρi τ, β−Bi}, then no such εiexists and (ˆθ,ˆτ)fails to be a local εiapproximate equilibrium point for that clients. It may happen in two cases: 1.The gradient dissimilarity ζi θ, ζi τis too large indicating high heterogeneity, then (ˆθ,ˆτ)- the solution to the federated objective would fail to become an approximate equilibrium point for the clients. It is a practical consideration for a federated convergence facing difficulty against high heterogeneity. 2.Ifα≈ρi τorβ≈Bi, indicating that the client’s local curvature structure significantly differs from the global curvature. In this case, the clients’ objectives may be flatter or even oppositely curved compared to the global model, that is, the objectives are highly heterogeneous. Now we state the result on the consistency of the estimator of the clients’ moment conditions. Theorem 2 (Consistency) .[Adaptation of Theorem 2 of (Bennett, Kallus, and Schnabel, 2019)] Let ˜θnbe a data- dependent choice for the federated objective that has a limit in probability. For each client i∈[N], define mi(θ, τ,˜θ) := fi(Zi;τ)(Yi−g(Xi;θ))−1 4fi(Zi;τ)2(Yi−g(Xi;˜θ))2,Mi(θ) = supτ∈TE[mi(θ, τ,˜θ)]and ηi(ϵ) := infd(θ,θ0)≥ϵMi(θ)−Mi(θ0)for every ϵ >0. Let (ˆθn,ˆτn)be a solution that satisfies the approximate equilibrium for each of the client i∈[N]as sup τ∈TUi ˜θ(ˆθn, τ)−εi−op(1)≤Ui ˜θ(ˆθn,ˆτn)≤inf θ∈Θmax τ′:∥τ′−ˆτn∥≤h(δ)Ui ˜θ(θ, τ′) +εi+op(1), for some δ0, such that for any δ∈(0, δ0],and any θ, τsuch that ∥θ−ˆθ∥ ≤δand∥τ−ˆτ∥ ≤δand a function h(δ)→0asδ→0. Then, under similar assumptions as in Assumptions 1 to 5 of (Bennett, Kallus, and Schnabel, 2019), the global solution ˆθnis a consistent estimator to the true parameter θ0, i.e. ˆθnp− →θ0when the approximate error εi<ηi(ϵ) 2for every ϵ >0for each client i∈[N]. The assumptions and the proof of Theorem 2 are included in Appendix C.3. Remark 2. Theorem 2 formalizes a tradeoff between data heterogeneity and the consistency of the global estimator in federated learning. If the approximation error εiis large for a client i∈[N], then the solution ˆθnmay fail to consistently estimate the true parameter of client i. In contrast, when data across clients have similar distribution (i.e., case for low heterogeneity), the federated optimal model ˆθnis consistent across clients. Now, we discuss that the limit points of FEDGDA will retrieve the local minimax points of the federated optimization problem. 8 Federated IV Analysis via Federated GMM, Geetika et | https://arxiv.org/abs/2505.21012v1 |
al. 4.2 Limit Points of F EDGDA Letα1=η γ, α2=ηbe the learning rates for gradient updates to θandτ, respectively. For details, refer to Algorithm 1 in Appendix B. Without loss of generality the F EDGDA updates are: θt+1=θt−η1 γ1 NX i∈[N]RX r=1∇θUi ˜θ(θi t,r, τi t,r)andτt+1=τt+η1 NX i∈[N]RX r=1∇τUi ˜θ(θi t,r, τi t,r) (23) We call it γ-FEDGDA , where γis the ratio of α1toα2. Asη→0corresponds to FEDGDA -flow, under the smoothness ofUi ˜θ, Assumption 3 and for some fixed R, FEDGDA-flow becomes: dθ dt=−1 γR∇θU˜θ(θ, τ) +OR γζθ ,anddτ dt=R∇τU˜θ(θ, τ) +O(Rζτ). (24) We further elaborate on F EDGDA-flow in Appendix D.1. Proposition 1. Given the Jacobian matrix for γ−FEDGDA flow as J=−1 γR∇2 θθU˜θ(θ, τ)−1 γR∇2 θτU˜θ(θ, τ) R∇2 τθU˜θ(θ, τ) R∇2 ττU˜θ(θ, τ) , a point (θ, τ)is a strictly linearly stable equilibrium of the γ−FEDGDA flow if and only if the real parts of all eigenvalues of Jare negative, i.e., Re(Λ j)<0for all j. Proposition 1 essentially defines a strictly linearly stable equilibrium of the γ−FEDGDA flow. The proof follows a strategy similar to (Jin, Netrapalli, and Jordan, 2020). With that, let γ-FGDA be the set of strictly linearly stable points of the γ-FEDGDA flow,LocMinimax be the set of local minimax points of the federated zero-sum game. Define ∞ − FGDA := lim sup γ→∞γ− FGDA :=∩γ0>0∪γ>γ 0γ− FGDA ,and ∞ − FGDA := lim inf γ→∞γ− FGDA :=∪γ0>0∩γ>γ 0γ− FGDA . We now state the theorem that establishes the stable limit points of ∞-FGDA as local minimax points, up to some degenerate cases. This theorem ensures that solutions to a minimax problem obtained using FEDGDA in the limit γ→ ∞ correspond to equilibrium points. Theorem 3. Under Assumption 1, LocMinimax ⊂ ∞ − FGDA ⊂∞ − FGDA ⊂ L ocMinimax ∪ A, where A:={(θ, τ)|(θ, τ)is stationary and ∇2 ττU˜θ(θ, τ)is degenerate }. Moreover, if the hessian ∇2 ττU˜θ(θ, τ)is smooth, thenAhas measure zero in Θ× T ⊂ Rd×Rk. Essentially, Theorem 3 states that the limit points of FEDGDA are the local minimax solutions, and thereby the equilibrium solution of the federated zero-sum game at the server, up to some degenerate cases with measure 0. The proof of Theorem 3 is included in Appendix D.2. Theorems 1, 2, and 3 together complete the theoretical foundation of the pipeline in our work. Obtaining the equilibrium solution of the federated zero-sum game at the server via the FEDGDA limit points, using Theorem 1 we get E- approximate federated equilibrium solutions, wherefrom we obtain clients’ approximate local minimax. Finally, applying Theorem 2 we retrieve the consistent estimators for GMM at the clients. 5 Experiments In the experiments, we extend the experimental evaluations of (Bennett, Kallus, and Schnabel, 2019) to a federated setting. We discuss this benchmark choice further in Appendix A. More specifically, we evaluate the ability of FEDGMM to fit low and high dimensional data to demonstrate that it converges analogous to the centralized algorithm DEEPGMM. Similar to (Bennett, Kallus, and Schnabel, 2019), we assess two scenarios in regards to ((X, Y), Z): (a)The instrumental and treatment variables ZandXare | https://arxiv.org/abs/2505.21012v1 |
both low-dimensional. In this case, we use 1- dimensional synthetic datasets corresponding to the following functions: (a) Absolute :g0(x) =|x|, (b) Step : g0(x) = 1 {x≥0}, (c)Linear :g0(x) =x. 9 Federated IV Analysis via Federated GMM, Geetika et al. To generate the synthetic data, similar to (Bennett, Kallus, and Schnabel, 2019; Lewis and Syrgkanis, 2018) we apply the following generation process: Y=g0(X) +e+δ andX=Z(1)+Z(2)+e+γ (25) (Z(1), Z(2))∼Uniform ([−3,3]2) ande∼ N(0,1), γ, δ ∼ N(0,0.1) (26) (b)ZandXare low-dimensional or high-dimensional or both. First, ZandXare generated as in (25,26). Then for high-dimensional data, we map ZandXto an image using the mapping: Image (x) =Dataset (round (min (max(1 .5x+ 5,0),9))), where (round (min(max(1 .5x+ 5,0),9))) returns an integer between 0 and 9. Essentially, the function Dataset (.)randomly selects an image following its index. We use datasets FEMNIST (Federated Extended MNIST) and CIFAR10 (Caldas et al., 2018) for images of size 28×28and3×32×32, respectively. Thus, we have the following cases: (a) Dataset z:X=Xlow, Z=Image (Zlow), (b)Dataset x:Z=Zlow, X=Image (Xlow), and (c) Dataset x,z:Z=Image (Zlow),X=Image (Xlow), where Dataset takes values FEMNIST and CIFAR10 and the superscript lowindicates the values generated using the process in low-dimensional case. (Bennett, Kallus, and Schnabel, 2019) used Optimistic Adam ( OA DAM ), a variant of Adam (Kingma, 2015) based stochastic gradient descent ascent algorithm (Daskalakis et al., 2018), which applies mirror descent based gradient updates. It guarantees the last iteration convergence of a GAN (Goodfellow et al., 2014) training problem. It is known that a well-tuned SGDoutperforms Adam in over-parametrized settings (Wilson et al., 2017), closely resembling ourFEDGMM implementation, where the size of neural networks often exceeds the data available on the clients. Considering that, we explored the comparative performance of GDA andSGDA against OA DAM for a centralized DEEPGMM implementation. Note that GDA also aligns with the analytical discussion presented in Section (4). We then implemented the federated versions of each of these methods and benchmarked them for solving the federated minimax optimization problem for the FEDDEEPGMM algorithm. For high-dimensional scenarios, we implement a Actual Causal Effect DeepGMM-OAdam DeepGMM-SGDA FedDeepGMM-SGDA DeepGMM-GDA FedDeepGMM-GDA (a)Absolute (b)Step (c)Linear Figure 1: Estimated ˆgcompared to true gin low-dimensional scenarios convolutional neural network (CNN) architecture to process images, while for low-dimensional scenarios, we use a multilayer perceptron (MLP). Code is available at https://github.com/dcll-iiitd/FederatedDeepGMM . Estimations DEEPGMM - OAdamDEEPGMM - GDAFDEEPGMM - GDADEEPGMM - SGDAFDEEPGMM - SGDA Absolute 0.03±0.01 0.013±.01 0.4±0.01 0.009±0.01 0.2±0.00 Step 0.3±0.00 0.03±0.00 0.04±0.01 0.112±0.00 0.23±0.01 Linear 0.01±0.00 0.02±0.00 0.01±0.00 0.03±0.00 0.04±0.00 FEMNIST x 0.50±0.00 1.11±0.01 0.21±0.02 0.40±0.01 0.19±0.01 FEMNIST x,z0.24±0.00 0.46±0.09 0.19±0.03 0.14±0.02 0.20±0.00 FEMNIST z 0.10±0.00 0.42±0.01 0.24±0.01 0.11±0.02 0.23±0.01 CIFAR10 x 0.55±0.30 0.19±0.01 0.25±0.03 0.20±0.08 0.22±0.08 CIFAR10 x,z 0.40±0.11 0.24±0.00 0.24±0.03 0.19±0.03 0.22±0.02 CIFAR10 z 0.13±0.03 0.13±0.01 1.70±2.60 0.24±0.01 0.52±0.60 Table 1: The averaged Test MSE with standard deviation on the low- and high-dimensional scenarios. Non-i.i.d. data. 
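The generation process in Eqs. (25)-(26), together with the Dirichlet-based non-i.i.d. client split described below, is straightforward to reproduce. A minimal sketch, assuming NumPy; α = 0.3 mirrors the description below, while the seed, the client count, and the reading of N(0, 0.1) as a standard deviation (rather than a variance) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def generate_iv_data(n, g0):
    # Synthetic IV data following Eqs. (25)-(26).
    Z = rng.uniform(-3.0, 3.0, size=(n, 2))       # (Z1, Z2) ~ Uniform([-3, 3]^2)
    e = rng.normal(0.0, 1.0, size=n)              # shared confounder
    gamma = rng.normal(0.0, 0.1, size=n)
    delta = rng.normal(0.0, 0.1, size=n)
    X = Z[:, 0] + Z[:, 1] + e + gamma             # treatment, endogenous through e
    Y = g0(X) + e + delta                         # outcome
    return X, Y, Z

def dirichlet_split(n, num_clients, alpha=0.3):
    # Assign sample indices to clients with Dirichlet(alpha) size proportions
    # (a simple reading of the Dir_S(alpha) split; the paper's scheme may differ).
    proportions = rng.dirichlet(alpha * np.ones(num_clients))
    counts = (proportions * n).astype(int)
    counts[-1] = n - counts[:-1].sum()            # make the counts add up to n
    idx = rng.permutation(n)
    return np.split(idx, np.cumsum(counts)[:-1])

X, Y, Z = generate_iv_data(20000, g0=np.abs)      # the "Absolute" scenario
client_indices = dirichlet_split(len(X), num_clients=10, alpha=0.3)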
We sample the train, test, and validation sets similarly to (Bennett, Kallus, and Schnabel, 2019). For the low-dimensional scenario, we sample n= 20000 points for each of the train, validation, and test sets, while for the | https://arxiv.org/abs/2505.21012v1
high-dimensional scenario, we have n= 20000 for the train set and n= 10000 for the validation and test set. To set up a non-i.i.d. distribution of data between clients, samples were divided amongst the clients using a Dirichlet distribution DirS(α)(Wang et al., 2019), where αdetermines the degree of heterogeneity across Sclients. We used DirS(α) = 0 .3for each train, test, and validation samples. Hyperparameters. We perform extensive grid-search to tune the learning rate. For FEDSGDA , we use a minibatch-size of 256. To avoid numerical instability, we standardize the observed Yvalues by removing the mean and scaling to unit variance. We perform five runs of each experiment and present the mean and standard deviation of the results. Observations and Discussion. In figure (1), we first observe that SGDA andGDA algorithms perform at par with OA DAM to fit the DEEPGMM estimator. It establishes that hyperparameter tuning is effective. With that, we further observe that the federated algorithms efficiently fit the estimated function to the true data-generating process competitive to the centralized algorithms even though the data is decentralized and non-i.i.d.. Thus, it shows that the federated algorithm converges effectively. In Table 1 we present the test mean squared error (MSE) values. The MSE values indicate that the federated implementation achieves competitive convergence to their centralized counterpart. These experiments establish the efficacy of our method. An Open Problem In this work, we characterized the equilibrium solutions of federated zero-sum games in consideration of local minimax solutions for non-convex non-concave minimax optimization problems. Regardless of the analytical assumptions over the objective, the mixed strategy solutions for zero-sum games exist. However, unlike the pure strategy solutions, where the standard heterogeneity considerations over gradients and Hessians across clients, translates a local minimax solution for the federated objective to approximate local solutions for the clients, it is not immediate how a mixed strategy solution as a probability measure can be translated to that for clients. It leaves an interesting open problem to characterize the mixed startegy solutions for federated zero-sum games. References Almodóvar, Alejandro, Juan Parras, and Santiago Zazo (2024). “Propensity Weighted federated learning for treatment effect estimation in distributed imbalanced environments”. In: Computers in Biology and Medicine 178, p. 108779 (cit. on p. 3). Angrist, Joshua D and Alan B Krueger (2001). “Instrumental variables and the search for identification: From supply and demand to natural experiments”. In: Journal of Economic perspectives 15.4, pp. 69–85 (cit. on p. 2). Angrist, Joshua D and Jörn-Steffen Pischke (2009). Mostly harmless econometrics: An empiricist’s companion . Princeton university press (cit. on p. 2). Antunes, Rodolfo Stoffel et al. (2022). “Federated learning for healthcare: Systematic review and architecture proposal”. In:ACM Transactions on Intelligent Systems and Technology (TIST) 13.4, pp. 1–23 (cit. on p. 1). Bennett, Andrew, Nathan Kallus, and Tobias Schnabel (2019). “Deep generalized method of moments for instrumental variable analysis”. In: Advances in neural information processing systems 32 (cit. on pp. 2–6, 8–10, 16, 22). Byrd, David and Antigoni Polychroniadou (2020). “Differentially private secure multi-party computation for federated learning in financial applications”. 
In: Proceedings of the First ACM International Conference | https://arxiv.org/abs/2505.21012v1 |
on AI in Finance, pp. 1–9 (cit. on p. 1).
Caldas, Sebastian et al. (2018). “Leaf: A benchmark for federated settings”. In: arXiv preprint arXiv:1812.01097 (cit. on p. 10).
Charles, Zachary and Dimitris Papailiopoulos (2018). “Stability and generalization of learning algorithms that converge to global optima”. In: International Conference on Machine Learning. PMLR, pp. 745–754 (cit. on p. 6).
Chatterjee, Bapi, Vyacheslav Kungurtsev, and Dan Alistarh (2024). “Federated SGD with Local Asynchrony”. In: 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS). IEEE, pp. 857–868 (cit. on p. 3).
Daskalakis, Constantinos et al. (2018). “Training GANs with Optimism”. In: International Conference on Learning Representations (cit. on p. 10).
Dayan, Ittai et al. (2021). “Federated learning for predicting clinical outcomes in patients with COVID-19”. In: Nature Medicine 27.10, pp. 1735–1743 (cit. on pp. 1, 2).
Deng, Yuyang and Mehrdad Mahdavi (2021). “Local stochastic gradient descent ascent: Convergence analysis and communication efficiency”. In: International Conference on Artificial Intelligence and Statistics. PMLR, pp. 1387–1395 (cit. on pp. 6, 17).
Gecer, Melike and Benoit Garbinato (2024). “Federated Learning for Mobility Applications”. In: ACM Computing Surveys 56.5, pp. 1–28 (cit. on p. 1).
Goodfellow, Ian et al. (2014). “Generative adversarial nets”. In: Advances in Neural Information Processing Systems 27 (cit. on p. 10).
Hansen, Lars Peter (1982). “Large sample properties of generalized method of moments estimators”. In: Econometrica: Journal of the Econometric Society, pp. 1029–1054 (cit. on pp. 2, 4).
Hartford, Jason et al. (2017). “Deep IV: A flexible approach for counterfactual prediction”. In: International Conference on Machine Learning. PMLR, pp. 1414–1423 (cit. on p. 2).
Hill, Jennifer L (2011). “Bayesian nonparametric modeling for causal inference”. In: Journal of Computational and Graphical Statistics 20.1, pp. 217–240 (cit. on p. 16).
Horn, Roger A. and Charles R. Johnson (2012). Matrix Analysis. 2nd ed. Cambridge University Press (cit. on p. 20).
Imai, Kosuke and Michael Lingzhi Li (2023). “Experimental evaluation of individualized treatment rules”. In: Journal of the American Statistical Association 118.541, pp. 242–256 (cit. on p. 2).
Izcovich, Ariel et al. (2020). “Prognostic factors for severity and mortality in patients infected with COVID-19: A systematic review”. In: PLoS ONE 15.11, e0241955 (cit. on p. 1).
Jin, Chi, Praneeth Netrapalli, and Michael Jordan (July 2020). “What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?” In: Proceedings of the 37th International Conference on Machine Learning. Ed. by Hal Daumé III and Aarti Singh. Vol. 119. Proceedings of Machine Learning Research. PMLR, pp. 4880–4889. URL: https://proceedings.mlr.press/v119/jin20e.html (cit. on pp. 6, 7, 9, 28).
Kairouz, Peter et al. (2021). “Advances and open problems in federated learning”. In: Foundations and Trends® in Machine Learning 14.1–2, pp. 1–210 (cit. on p. 1).
Kallus, Nathan (2020a). “Deepmatch: Balancing deep covariate representations for causal inference using adversarial training”. In: International Conference on Machine Learning. PMLR, pp. 5067–5077 (cit. on p. 2).
– (2020b). “Generalized optimal matching methods for causal inference”. In: Journal of Machine Learning Research 21.62, pp. 1–54 (cit. on p. 2).
Karimireddy, Sai Praneeth et al. (2020). “Scaffold: Stochastic controlled averaging for federated learning”. In: International Conference on Machine Learning. PMLR, pp. 5132–5143 (cit. on p. 3).
Kingma, Diederik P (2015). “Adam: A method for stochastic optimization”. In: ICLR (cit. on p. 10).
Kingma, Diederik P, Max Welling, et al. (2019). “An introduction to variational autoencoders”. In: Foundations and Trends® in Machine Learning 12.4, pp. 307–392 (cit. on p. 3).
Kohavi, Ron et al. (2013). “Online controlled experiments at large scale”. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1168–1176 (cit. on p. 2).
Latchman, David S (1993). “Transcription factors: an overview.” In: International Journal of Experimental Pathology 74.5, p. 417 (cit. on p. 2).
Lewis, Greg and Vasilis Syrgkanis (2018). Adversarial Generalized Method of Moments. arXiv: 1803.07164 [econ.EM]. URL: https://arxiv.org/abs/1803.07164 (cit. on pp. 2, 10).
Li, Tian et al. (2020). “Federated optimization in heterogeneous networks”. In: Proceedings of Machine Learning and Systems 2, pp. 429–450 (cit. on p. 3).
Liang, Yafen et al. (2023). “Interplay of hypoxia-inducible factors and oxygen therapy in cardiovascular medicine”. In: Nature Reviews Cardiology 20.11, pp. 723–737 (cit. on p. 2).
Liu, Chaoyue, Libin Zhu, and Mikhail Belkin (2022). “Loss landscapes and optimization in over-parameterized non-linear systems and neural networks”. In: Applied and Computational Harmonic Analysis 59, pp. 85–116 (cit. on p. 6).
Long, Guodong et al. (2020). “Federated learning for open banking”. In: Federated Learning: Privacy and Incentive. Springer, pp. 240–254 (cit. on p. 1).
Louizos, Christos et al. (2017). “Causal effect inference with deep latent-variable models”. In: Advances in Neural Information Processing Systems 30 (cit. on p. 16).
McMahan, Brendan et al. (2017). “Communication-efficient learning of deep networks from decentralized data”. In: Artificial Intelligence and Statistics. PMLR, pp. 1273–1282 (cit. on p. 1).
Nguyen, Dinh C et al. (2022). “Federated learning for smart healthcare: A survey”. In: ACM Computing Surveys (CSUR) 55.3, pp. 1–37 (cit. on p. 1).
Noble, Maxence, Aurélien Bellet, and Aymeric Dieuleveut (2022). “Differentially private federated learning on heterogeneous data”. In: International Conference on Artificial Intelligence and Statistics. PMLR, pp. 10110–10145 (cit. on p. 3).
Oh, Wonsuk and Girish N Nadkarni (2023). “Federated learning in health care using structured medical data”. In: Advances in Kidney Disease and Health 30.1, pp. 4–16 (cit. on p. 1).
Oldenhof, Martijn et al. (2023). “Industry-scale orchestrated federated learning for drug discovery”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. 13, pp. 15576–15584 (cit. on pp. 1, 2).
Pearl, Judea (2009). “Causal inference in statistics: An overview”. In: (cit. on p. 3).
Reddi, Sashank et al. (2020). “Adaptive federated optimization”. In: arXiv preprint arXiv:2003.00295 (cit. on p. 3).
Reiersøl, Olav (1945). “Confluence analysis by means of instrumental sets of variables”. PhD thesis. Almqvist & Wiksell (cit. on p. 2).
Russell, Clark D, Nazir I Lone, and J Kenneth Baillie (2023). “Comorbidities, multimorbidity and COVID-19”. In: Nature Medicine 29.2, pp. 334–343 (cit. on p. 1).
Shalit, Uri, Fredrik D Johansson, and David Sontag (2017). “Estimating individual treatment effect: generalization bounds and algorithms”. In: International Conference on Machine Learning. PMLR, pp. 3076–3085 (cit. on pp. 2, 16).
Sharma, Pranay et al. (2022). “Federated minimax optimization: Improved convergence analyses and algorithms”. In: International Conference on Machine Learning. PMLR, pp. 19683–19730 (cit. on pp. 2, 3, 6, 17).
Shen, Wei et al. (2024). “Stochastic Smoothed Gradient Descent Ascent for Federated Minimax Optimization”. In: International Conference on Artificial Intelligence and Statistics. PMLR, pp. 3988–3996 (cit. on pp. 3, 6).
Steif, Alison Etheridge et al. (2014). “Nobel Prize in Economics”. In: IMS Bulletin 43.1 (cit. on p. 2).
T Dinh, Canh, Nguyen Tran, and Josh Nguyen (2020). “Personalized federated learning with Moreau envelopes”. In: Advances in Neural Information Processing Systems 33, pp. 21394–21405 (cit. on p. 3).
Vo, Thanh Vinh et al. (2022a). “An adaptive kernel approach to federated learning of heterogeneous causal effects”. In: Advances in Neural Information Processing Systems 35, pp. 24459–24473 (cit. on pp. 3, 16).
Vo, Thanh Vinh et al. (2022b). “Bayesian federated estimation of causal effects from observational data”. In: Uncertainty in Artificial Intelligence. PMLR, pp. 2024–2034 (cit. on pp. 3, 16).
Wang, Hongyi et al. (2019). “Federated Learning with Matched Averaging”. In: International Conference on Learning Representations (cit. on p. 11).
Wang, Jingzhou, Toshiro Sato, and Atsushi Sakuraba (2021). “Worldwide association of lifestyle-related factors and COVID-19 mortality”. In: Annals of Medicine 53.1, pp. 1531–1536 (cit. on p. 1).
Wilson, Ashia C et al. (2017). “The marginal value of adaptive gradient methods in machine learning”. In: Advances in Neural Information Processing Systems 30 (cit. on p. 10).
Wooldridge, Jeffrey M (2001). “Applications of generalized method of moments estimation”. In: Journal of Economic Perspectives 15.4, pp. 87–100 (cit. on pp. 2, 3).
Wright, Philip Green (1928). The Tariff on Animal and Vegetable Oils. 26. Macmillan (cit. on p. 2).
Wu, Xidong et al. (2024). “Solving a class of non-convex minimax optimization in federated learning”. In: Advances in Neural Information Processing Systems 36 (cit. on pp. 2, 3, 6).
Wu, Yue et al. (2023). “Personalized federated learning under mixture of distributions”. In: International Conference on Machine Learning. PMLR, pp. 37860–37879 (cit. on p. 3).
Xiong, Ruoxuan et al. (2023). “Federated causal inference in heterogeneous observational data”. In: Statistics in Medicine 42.24, pp. 4418–4439 (cit. on p. 3).
Ye, Mang et al. (2023). “Heterogeneous federated learning: State-of-the-art and research challenges”. In: ACM Computing Surveys 56.3, pp. 1–44 (cit. on p. 1).
Zedek, Mishael (1965). “Continuity and Location of Zeros of Linear Combinations of Polynomials”. In: Proceedings of the American Mathematical Society 16.1, pp. 78–84. ISSN: 00029939, 10886826. URL: http://www.jstor.org/stable/2034005 (visited on 02/10/2025) (cit. on p. 28).
Zhang, Weijia, Lin Liu, and Jiuyong Li (2021). “Treatment effect estimation with disentangled latent factors”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. 12, pp. 10923–10930 (cit. on pp. 3, 16).
Zheng, Zhaohua et al. (2022). “Applications of federated learning in smart cities: recent advances, taxonomy, and open challenges”. In: Connection Science 34.1, pp. 1–28 (cit. on p. 1).
Zhu, Miaoxi et al. (2024). “Stability and generalization of the decentralized stochastic gradient descent ascent algorithm”. In: Advances in Neural Information Processing Systems 36 (cit. on p. 3).

APPENDIX
A  The Experimental Benchmark Design  16
B  Federated Gradient Descent Ascent Algorithm Description  16
C  Proofs  17
  C.1  Proof of Lemma 2  17
  C.2  Proof of Theorem 1  19
  C.3  Consistency  22
    C.3.1  Assumptions  22
    C.3.2  Proof of Theorem 2  22
D  Limit Points of FEDGDA  25
  D.1  FEDGDA Flow  25
  D.2  Proof of Theorem 3  27

A The Experimental Benchmark Design

It is standard in this area to perform experimental analysis on synthetic datasets, because ground truth for causal inference is unavailable; see, for example, Section 4.1.1 of Vo et al. (2022b).
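For concreteness, the snippet below sketches the kind of synthetic instrumental-variable data-generating process that is common in this literature, partitioned across clients. The functional forms, noise levels, and per-client shifts are illustrative assumptions only and are not the exact benchmark design used in our experiments.

```python
# Illustrative synthetic IV data, split across clients (assumptions, not the
# paper's benchmark): Z is the instrument, e an unobserved confounder, X an
# endogenous treatment, and Y the outcome confounded through e.
import numpy as np

def make_client_data(n, rng, shift=0.0):
    e = rng.normal(0.0, 1.0, n)                    # unobserved confounder
    z = rng.uniform(-3.0, 3.0, n)                  # instrument, independent of e
    x = z + e + rng.normal(0.0, 0.1, n) + shift    # endogenous treatment
    y = np.abs(x) + e + rng.normal(0.0, 0.1, n)    # outcome; g0(x) = |x| here
    return z, x, y

rng = np.random.default_rng(0)
# Four clients with mildly heterogeneous treatment distributions.
clients_data = [make_client_data(n=1000, rng=rng, shift=0.5 * i) for i in range(4)]
```

Because the structural function (here $g_0(x) = |x|$) is known by construction, the fit of any federated estimator can be evaluated exactly on such data.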
Nonetheless, an experimental comparison of our work with recent works on federated methods for causal effect estimation is not direct. More specifically, see the following:

(i) CAUSALRFF (Vo et al., 2022a) and FEDCI (Vo et al., 2022b). The aim of CAUSALRFF is to estimate the conditional average treatment effect (CATE) and the average treatment effect (ATE), whereas FEDCI aims to estimate the individual treatment effect (ITE) and the ATE. For this, Vo et al. (2022a) consider $Y$, $W$, and $X$ to be random variables denoting the outcome, treatment, and proxy variable, respectively, together with a confounding variable $Z$. However, their causal dependency structure builds on the dependence of each of $Y$, $W$, and $X$ on $Z$, besides the dependence of $Y$ on $W$. Consequently, to compute CATE and ATE, they need to estimate the conditional probabilities $p(w^i \mid x^i)$, $p(y^i \mid x^i, w^i)$, $p(z^i \mid x^i, y^i, w^i)$, and $p(y^i \mid w^i, z^i)$, where the superscript $i$ indexes a client. Their experiments compare the estimates of CATE and ATE with the Bayesian baselines of Hill (2011), Shalit, Johansson, and Sontag (2017), Louizos et al. (2017), etc., in a centralized setting, without any consideration of the data decentralization or heterogeneity native to federated learning. Further, they compare against the same baselines in a one-shot federated setting, where, after training independently on separate data sources, the predicted treatment effects are averaged. The experimental evaluation of Vo et al. (2022b) is similar. By contrast, the IV-analysis setting of our work does not consider a dependency of the outcome variable $Y$ on the confounder $Z$, though the treatment variable $X$ could be endogenous and depend on $Z$. For us, computing the treatment effects, and thereby comparing against these works, is not direct. Furthermore, it is unclear whether the approach of Vo et al. (2022a,b), where the predicted inference over a number of datasets is averaged as the final result, would be comparable to our approach, in which the problem is solved using a federated maximin optimization with multiple synchronization rounds among the clients. For us, the federated optimization subsumes the experiment of comparing the average of predicted values after independent training with the predicted value over the entire data. This is the reason that our centralized counterpart, DEEPGMM (Bennett, Kallus, and Schnabel, 2019), does not experimentally compare against the baselines of Vo et al. (2022a,b). In summary, our experimental benchmarks were guided by showing an efficient fit of the GMM estimator in a federated setting.

(ii) TEDVAE (Zhang, Liu, and Li, 2021). As mentioned above, their aim was to showcase the advantage of weighted averaging over the vanilla averaging of FedAvg. By contrast, our experiments aim to show that, even in a federated setting, the maximin optimization converges analogously to its centralized counterpart.

B Federated Gradient Descent Ascent Algorithm Description

Algorithm 1 FEDGDA running on a federated learning server to solve the minimax problem (20)
Server Input: initial global estimates $\theta_1, \tau_1$; constant local learning rates $\alpha_1, \alpha_2$; total $N$ clients
Output: global model states $\theta_{T+1}, \tau_{T+1}$
1: for synchronization round $t = 1, \dots, T$ do
2:   server sends $\theta_t, \tau_t$ to all clients
3:   for each $i \in [N]$ in parallel do
4:     $\theta^i_{t,1} \leftarrow \theta_t$, $\tau^i_{t,1} \leftarrow \tau_t$
5:     for $r = 1, 2, \dots, R$ do
6:       $\theta^i_{t,r+1} = \theta^i_{t,r} - \alpha_1 \nabla_\theta f_i(\theta^i_{t,r}, \tau^i_{t,r})$
7:       $\tau^i_{t,r+1} = \tau^i_{t,r} + \alpha_2 \nabla_\tau f_i(\theta^i_{t,r}, \tau^i_{t,r})$
8:     end for
9:     $(\Delta\theta^i_t, \Delta\tau^i_t) \leftarrow (\theta^i_{t,R+1} - \theta_t,\; \tau^i_{t,R+1} - \tau_t)$
10:    end for
11:    $(\Delta\theta_t, \Delta\tau_t) \leftarrow \frac{1}{N}\sum_{i \in [N]} (\Delta\theta^i_t, \Delta\tau^i_t)$
12:    $\theta_{t+1} \leftarrow \theta_t + \Delta\theta_t$, $\tau_{t+1} \leftarrow \tau_t + \Delta\tau_t$
13: end for
14: return $\theta_{T+1}$, $\tau_{T+1}$
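To make the control flow of Algorithm 1 concrete, the following is a minimal NumPy sketch. The client interface (objects exposing grad_theta and grad_tau oracles for their local objective $f_i$), the sequential loop standing in for parallel execution, and the toy quadratic objective used for the usage example are illustrative assumptions, not part of the original pseudocode.

```python
import numpy as np

def fedgda(clients, theta0, tau0, alpha1, alpha2, T, R):
    """Minimal FedGDA sketch: min over theta, max over tau of (1/N) sum_i f_i."""
    theta = np.asarray(theta0, dtype=float)
    tau = np.asarray(tau0, dtype=float)
    for _ in range(T):                              # synchronization rounds
        d_thetas, d_taus = [], []
        for client in clients:                      # sequential stand-in for "in parallel"
            th, ta = theta.copy(), tau.copy()
            for _ in range(R):                      # R local GDA steps
                g_th = client.grad_theta(th, ta)
                g_ta = client.grad_tau(th, ta)      # both gradients at the same iterate
                th = th - alpha1 * g_th             # descent step on theta
                ta = ta + alpha2 * g_ta             # ascent step on tau
            d_thetas.append(th - theta)             # local model deltas
            d_taus.append(ta - tau)
        theta = theta + np.mean(d_thetas, axis=0)   # server averages the deltas
        tau = tau + np.mean(d_taus, axis=0)
    return theta, tau

class QuadraticClient:
    """Toy client with f_i(theta, tau) = 0.5*||theta - a_i||^2 - 0.5*||tau - b_i||^2."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def grad_theta(self, theta, tau):
        return theta - self.a
    def grad_tau(self, theta, tau):
        return -(tau - self.b)

clients = [QuadraticClient(np.array([i, 1.0]), np.array([1.0, -i])) for i in range(4)]
theta_star, tau_star = fedgda(clients, np.zeros(2), np.zeros(2),
                              alpha1=0.05, alpha2=0.1, T=200, R=5)
# For this toy objective, theta_star approaches the mean of the a_i and
# tau_star the mean of the b_i.
```

Note that both local gradients are evaluated at the same iterate before either variable is updated, matching the simultaneous update in lines 6–7 of Algorithm 1.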
We adapt the proof of Theorem 1 in (Sharma et al., 2022), for the SGDA algorithm proposed in (Deng and Mahdavi, 2021), to the FEDGDA Algorithm 1 for smooth nonconvex-PL problems.

Assumption 5 (Polyak–Łojasiewicz (PL) condition in $\tau$). The function $U_{\tilde\theta}$ satisfies the $\mu$-PL condition in $\tau$, $\mu > 0$, if for any fixed $\theta$, $\arg\max_{\tau'} U_{\tilde\theta}(\theta, \tau') \neq \emptyset$ and
$$\|\nabla_\tau U_{\tilde\theta}(\theta, \tau)\|^2 \geq 2\mu \left( \max_{\tau'} U_{\tilde\theta}(\theta, \tau') - U_{\tilde\theta}(\theta, \tau) \right).$$

Theorem 4. Let the local loss functions $U^i_{\tilde\theta}$ for all $i \in \{1, 2, \dots, N\}$ satisfy Assumptions 2 and 3, and let the federated objective function satisfy Assumption 5. Suppose $\alpha_2 \leq \frac{1}{8LR}$ and $\frac{\alpha_1}{\alpha_2} \leq \frac{1}{8\kappa^2}$, where $\kappa = \frac{L}{\mu}$ is the condition number. Let $\bar\theta_{T+1}$ be drawn uniformly at random from $\{\theta_t\}_{t=1}^{T+1}$; then the following holds:
$$\|\nabla\tilde\Phi(\bar\theta_{T+1})\|^2 \leq \mathcal{O}\!\left(\frac{\kappa^2 \Delta_{\tilde\Phi}}{\alpha_2 R (T+1)}\right) + \mathcal{O}\!\left(\kappa^2 (R-1)^2 \left[\alpha_2^2 \zeta_\tau^2 + \alpha_1^2 \zeta_\theta^2\right]\right),$$
where $\tilde\Phi(\cdot) := \max_\tau U_{\tilde\theta}(\cdot, \tau)$ is the envelope function, $\Delta_{\tilde\Phi} := \tilde\Phi(\theta_0) - \min_\theta \tilde\Phi(\theta)$, and $\zeta_\theta := \frac{1}{N}\sum_{i=1}^N \zeta^i_\theta$, $\zeta_\tau := \frac{1}{N}\sum_{i=1}^N \zeta^i_\tau$. Using $\alpha_1 = \mathcal{O}\!\left(\frac{1}{\kappa^2}\sqrt{\frac{N}{R(T+1)}}\right)$ and $\alpha_2 = \mathcal{O}\!\left(\sqrt{\frac{N}{R(T+1)}}\right)$, $\|\nabla\tilde\Phi(\bar\theta_{T+1})\|^2$ can be bounded as
$$\mathcal{O}\!\left(\frac{\kappa^2 \Delta_{\tilde\Phi}}{\sqrt{NR(T+1)}} + \frac{\kappa^2 (R-1)^2 NR\,(\zeta_\theta^2 + \zeta_\tau^2)}{R(T+1)}\right).$$

Although the original assumption uses the supremum of average squared deviations, say $\zeta'_\theta$ and $\zeta'_\tau$, we use per-client dissimilarity bounds $\zeta^i_\theta, \zeta^i_\tau$ and upper bound these quantities as $(\zeta'_\theta)^2 \leq \frac{1}{N}\sum_{i=1}^N (\zeta^i_\theta)^2 =: \zeta_\theta^2$ and $(\zeta'_\tau)^2 \leq \frac{1}{N}\sum_{i=1}^N (\zeta^i_\tau)^2 =: \zeta_\tau^2$. Since there is no stochasticity, we use the bounded variance $\sigma = 0$. For details, refer to the proof of Theorem 1 in (Sharma et al., 2022).

C Proofs

C.1 Proof of Lemma 2

Lemma 4 (Restatement of Lemma 2). Let $\mathcal{F} = \mathrm{span}\{f^i_j \mid i \in [N], j \in [m]\}$. An equivalent objective function for the federated moment estimation optimization problem (18) is given by
$$\|\psi_N(f; \theta)\|^2_{\tilde\theta} = \sup_{f^i \in \mathcal{F},\; \forall i \in [N]} \frac{1}{N}\sum_{i=1}^N \left( \psi_{n_i}(f^i; \theta) - \frac{1}{4} C_{\tilde\theta}(f^i; f^i) \right), \qquad (27)$$
where
$$\psi_{n_i}(f^i; \theta) := \frac{1}{n_i}\sum_{k=1}^{n_i} f^i(Z^i_k)\left(Y^i_k - g^i(X^i_k; \theta)\right), \quad C_{\tilde\theta}(f^i, f^i) := \frac{1}{n_i}\sum_{k=1}^{n_i} \left(f^i(Z^i_k)\right)^2 \left(Y^i_k - g^i(X^i_k; \tilde\theta)\right)^2.$$

Proof. Let $\psi = \left(\frac{1}{N}\sum_{i=1}^N \psi_{n_i}(f^i_1; \theta),\; \frac{1}{N}\sum_{i=1}^N \psi_{n_i}(f^i_2; \theta),\; \dots,\; \frac{1}{N}\sum_{i=1}^N \psi_{n_i}(f^i_m; \theta)\right)$. We know that $\|v\|^2 = v^\top C^{-1}_{\tilde\theta} v$, and the associated dual norm is obtained as $\|v\|_*^2 = \left(\sup_{\|u\| \leq 1} u^\top v\right)^2 = v^\top C_{\tilde\theta}\, v$. Using the definition of the dual norm,
$$\|\psi\| = \sup_{\|v\|_* \leq 1} v^\top \psi, \qquad \|\psi\|^2 = \sup_{\|v\|_* \leq \|\psi\|} v^\top \psi, \qquad \|\psi\|^2 = \sup_{v^\top C_{\tilde\theta} v \leq \|\psi\|^2} v^\top \psi. \qquad (28)$$
We now find the equivalent dual optimization problem for (28). The Lagrangian of the constrained maximization problem (28) is given by
$$L(v, \lambda) = v^\top \psi + \lambda\left(v^\top C_{\tilde\theta}\, v - \|\psi\|^2\right), \quad \text{where } \lambda \leq 0.$$
To maximize $L(v, \lambda)$ w.r.t. $v$, set $\frac{\partial L}{\partial v} = \psi + 2\lambda C_{\tilde\theta}\, v = 0$ to obtain $v = -\frac{1}{2\lambda} C^{-1}_{\tilde\theta} \psi$. When $\|\psi\| > 0$, $v = 0$ satisfies Slater's condition as a strictly feasible interior point of the constraint $v^\top C_{\tilde\theta}\, v - \|\psi\|^2 \leq 0$. Thus, strong duality holds. Substituting $v = -\frac{1}{2\lambda} C^{-1}_{\tilde\theta} \psi$ in the Lagrangian gives
$$L^*(\lambda) = -\frac{1}{2\lambda}\psi^\top C^{-1}_{\tilde\theta}\psi + \frac{1}{4\lambda}\psi^\top C^{-1}_{\tilde\theta}\psi - \lambda\|\psi\|^2 = -\frac{\|\psi\|^2}{4\lambda} - \lambda\|\psi\|^2.$$
Hence, the dual becomes $\|\psi\|^2 = \inf_{\lambda < 0} L^*(\lambda)$. Thus, the equivalent dual optimization problem for (28) is given by
$$\|\psi\|^2 = \inf_{\lambda < 0} \left\{ -\frac{\|\psi\|^2}{4\lambda} - \lambda\|\psi\|^2 \right\}. \qquad (29)$$
Setting $\frac{\partial L^*}{\partial \lambda} = \frac{\|\psi\|^2}{4\lambda^2} - \|\psi\|^2 = 0$ gives $\lambda = -\frac{1}{2}$. Thus, due to strong duality, $\|\psi\|^2 =$ | https://arxiv.org/abs/2505.21012v1 |