akanatas committed
Commit 0a16afd · verified · 1 Parent(s): 16127ca

Update README.md

Files changed (1): README.md (+10 −2)
README.md CHANGED
@@ -16,7 +16,7 @@ base_model:
 ---
 
 # CultureMERT: Continual Pre-Training for Cross-Cultural Music Representation Learning
-📑 [**Read the full paper (to be presented at ISMIR 2025)**](...TODO)
+📑 [**Read the full paper (to be presented at ISMIR 2025)**](https://arxiv.org/abs/2506.17818)
 
 **CultureMERT-TA-95M** is a 95M-parameter music foundation model adapted to diverse musical cultures through [**task arithmetic**](https://arxiv.org/abs/2212.04089). Instead of direct continual pre-training on a multi-cultural mixture, as in [CultureMERT-95M](https://huggingface.co/ntua-slp/CultureMERT-95M), this model merges multiple **single-culture adapted** variants of [MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M)—each continually pre-trained via our two-stage strategy on a distinct musical tradition:
 
@@ -125,7 +125,15 @@ This model is released under a non-commercial CC BY-NC 4.0 license and is intend
 # 📚 Citation
 
 ```shell
-...TODO
+@misc{kanatas2025culturemertcontinualpretrainingcrosscultural,
+      title={CultureMERT: Continual Pre-Training for Cross-Cultural Music Representation Learning},
+      author={Angelos-Nikolaos Kanatas and Charilaos Papaioannou and Alexandros Potamianos},
+      year={2025},
+      eprint={2506.17818},
+      archivePrefix={arXiv},
+      primaryClass={cs.SD},
+      url={https://arxiv.org/abs/2506.17818},
+}
 ```
 
 ---
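
For reference, the weight-space merge that the description above attributes to task arithmetic can be sketched as follows. This is a minimal illustration of the technique from the cited task-arithmetic paper (Ilharco et al., 2022), not the released merging recipe: the single-culture checkpoint names and the scaling factor `LAM` are hypothetical placeholders, and the sketch assumes every adapted checkpoint shares the MERT-v1-95M architecture and parameter names, since task vectors are computed and added parameter by parameter.

```python
import torch
from transformers import AutoModel

BASE_REPO = "m-a-p/MERT-v1-95M"
ADAPTED_REPOS = [
    "ntua-slp/culture-A",  # hypothetical single-culture adapted checkpoint
    "ntua-slp/culture-B",  # hypothetical single-culture adapted checkpoint
]
LAM = 0.2  # illustrative scaling factor, not the value used for CultureMERT-TA-95M

# Load the base model and keep a frozen copy of its weights.
base = AutoModel.from_pretrained(BASE_REPO, trust_remote_code=True)
base_sd = {k: v.clone() for k, v in base.state_dict().items()}

# Start from the base weights and add each culture's scaled task vector:
# theta_merged = theta_base + LAM * sum_i (theta_i - theta_base)
merged = {k: v.clone() for k, v in base_sd.items()}
for repo in ADAPTED_REPOS:
    adapted_sd = AutoModel.from_pretrained(repo, trust_remote_code=True).state_dict()
    for k, v in merged.items():
        if v.is_floating_point():  # skip integer buffers, which carry no task vector
            v.add_(LAM * (adapted_sd[k] - base_sd[k]))

# The merged weights are used directly, with no further gradient updates.
base.load_state_dict(merged)
```

Because the merge happens purely in weight space, it requires no training data at merge time; the design trade-off is that all checkpoints must descend from the same base model so that corresponding parameters remain comparable.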