Add comprehensive model card for OMAR-RQ

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +188 -0
README.md ADDED
@@ -0,0 +1,188 @@
---
license: cc-by-nc-sa-4.0
library_name: omar_rq
pipeline_tag: audio-classification
tags:
- audio
- music
- self-supervised-learning
- masked-language-modeling
---

# OMAR-RQ: Open Music Audio Representation Model Trained with Multi-Feature Masked Token Prediction

This repository contains the model weights and code for **OMAR-RQ**, an Open Music Audio Representation Model trained with Multi-Feature Masked Token Prediction, introduced in the paper [OMAR-RQ: Open Music Audio Representation Model Trained with Multi-Feature Masked Token Prediction](https://huggingface.co/papers/2507.03482).

OMAR-RQ is developed to advance research in music audio understanding and to provide powerful, multipurpose representations for music information retrieval.

[📄 Paper](https://huggingface.co/papers/2507.03482) | [💻 Code](https://github.com/MTG/OMAR-RQ)

## Abstract

Developing open-source foundation models is essential for advancing research in music audio understanding and ensuring access to powerful, multipurpose representations for music information retrieval. We present OMAR-RQ, a model trained with self-supervision via masked token classification methodologies using a large-scale dataset with over 330,000 hours of music audio. We experiment with different input features and quantization options, and achieve state-of-the-art performance in music tagging, pitch estimation, chord recognition, beat tracking, segmentation, and difficulty estimation among open self-supervised models. We open-source our training and evaluation pipelines and model weights, available at https://github.com/MTG/OMAR-RQ.

## Installation

For embedding extraction or fine-tuning:

```bash
pip install .
```

For development, including pre-training your own models:

```bash
pip install -e .[train]
```

## Inference

Load a model by specifying its Hugging Face model ID:

```python
import torch
from omar_rq import get_model

# Embedding extraction example: 4 seconds of dummy mono audio at 16 kHz
x = torch.randn(1, 16000 * 4).cpu()

model_id = "mtg-upf/omar-rq-multifeature-25hz-fsq"
model = get_model(model_id=model_id, device="cpu")

embeddings = model.extract_embeddings(x, layers=[6])

# use the `eps` field (embedding rate) to compute frame timestamps in seconds
timestamps = torch.arange(embeddings.shape[2]) / model.eps
```
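
The example above uses random audio. For real recordings, the input should be a mono waveform at the sample rate indicated in the model config (the examples and pre-training data here use 16 kHz) and, per the reference below, up to 30 s long. A minimal sketch using `torchaudio`, which is not a dependency of this package (`track.wav` is a placeholder path):

```python
import torchaudio

from omar_rq import get_model

# Load a local file, downmix to mono, and resample to 16 kHz.
waveform, sr = torchaudio.load("track.wav")    # (channels, T')
waveform = waveform.mean(dim=0, keepdim=True)  # (1, T') mono
waveform = torchaudio.functional.resample(waveform, orig_freq=sr, new_freq=16000)

model = get_model(model_id="mtg-upf/omar-rq-multifeature-25hz-fsq", device="cpu")
embeddings = model.extract_embeddings(waveform, layers=[6])  # (L, B, T, C)

# A simple clip-level representation: average the frame embeddings over time.
clip_embedding = embeddings[0, 0].mean(dim=0)  # (C,)
```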

`get_model` reference:

```
Returns an OMAR-RQ Module from the provided model_id or config_file.

Args:
    model_id (str): Hugging Face model ID or local path to the model.
    config_file (Path): Path to the model config of a trained model.
    device (str): Device to use for the model. Defaults to "cpu".
    quantization_targets (bool): If True, it will create the quantization
        targets for SSL pre-training of the model. Defaults to False.

Output:
    module: The model from the provided config file.


Module usage:

    Args:
        audio (torch.Tensor): 2D mono audio tensor (B, T'), where B is
            the batch size and T' is the number of samples.
        layers (set): Set of layer indices to extract embeddings from.
            By default, it extracts embeddings from the last layer (logits).

    Output:
        torch.Tensor: Extracted embeddings. The output tensor has shape
            (L, B, T, C), where L = len(layers), B is the batch size, T is
            the number of output timestamps, and C is the embedding dimension.

    Example:

        >>> x = torch.randn(1, 16000 * 4).cpu()
        >>>
        >>> model = get_model(config_file, device="cpu")
        >>>
        >>> embeddings = model.extract_embeddings(x, layers=[6])
        >>>
        >>> # use the `eps` field to compute timestamps
        >>> timestamps = torch.arange(embeddings.shape[2]) / model.eps

    NOTE: The model's embedding rate depends on the model's configuration.
    For example, the melspectrogram model has an embedding rate of 16 ms.
    `audio` should be a sequence with the sample rate indicated in the
    config file and up to 30 s long.
```

`extract_embeddings` reference:

```
Extract embeddings from an input audio batch.

Args:
    audio (torch.Tensor): 2D mono audio tensor (B, T'), where B is
        the batch size and T' is the number of samples.
    layers (set): Set of layer indices to extract embeddings from.
        By default, it extracts embeddings from the last layer (logits).

Output:
    torch.Tensor: Extracted embeddings. The output tensor has shape
        (L, B, T, C), where L = len(layers), B is the batch size, T is
        the number of output timestamps, and C is the embedding dimension.
```
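
Continuing from the inference example above, the requested layers are stacked along the first output axis, so multi-layer extraction can be unpacked as in the following sketch (layer indices 6, 9, and 12 are arbitrary examples; shapes follow the reference above):

```python
layers = [6, 9, 12]                                      # arbitrary example indices
embeddings = model.extract_embeddings(x, layers=layers)  # (L, B, T, C)

for layer_idx, layer_emb in zip(layers, embeddings):
    # each slice along the first axis is one requested layer, shape (B, T, C)
    print(layer_idx, tuple(layer_emb.shape))

# frame timestamps in seconds, shared by all requested layers
timestamps = torch.arange(embeddings.shape[2]) / model.eps
```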

## Available models

| Model | Input | Rate (Hz) | Tagging (_mAP_) | Difficulty (_MSE_) | Pitch (_acc._) | Chord (_acc._) | Beat (_F1_) | Structure (_acc._) |
|---|---|---|---|---|---|---|---|---|
| **base** | mel | 15.63 | .482 | **1.65** | .892 | .657 | .783 | **.647** |
| **multicodebook** | mel | 15.63 | **.488** | 1.66 | .897 | .675 | .775 | .639 |
| **multifeature** | audio | 18.75 | .467 | 1.76 | .938 | .734 | .833 | .623 |
| **multifeature-25hz** | audio | 25 | .463 | 1.79 | .932 | .728 | .848 | .628 |
| **multifeature-25hz-fsq** | audio | 25 | .463 | 1.71 | **.940** | **.749** | **.855** | .628 |

OMAR-RQ models are offered in different configurations, each with its own strengths and weaknesses. Models based on mel spectrogram input (**base** and **multicodebook**) tend to perform better on semantic tasks such as auto-tagging, structure recognition, and difficulty estimation. On the other hand, **multifeature-25hz-fsq** offers the best performance in tonal and temporal tasks such as pitch and chord estimation, and beat tracking.

### Hugging Face Model IDs

- [mtg-upf/omar-rq-base](https://huggingface.co/mtg-upf/omar-rq-base)
- [mtg-upf/omar-rq-multicodebook](https://huggingface.co/mtg-upf/omar-rq-multicodebook)
- [mtg-upf/omar-rq-multifeature](https://huggingface.co/mtg-upf/omar-rq-multifeature)
- [mtg-upf/omar-rq-multifeature-25hz](https://huggingface.co/mtg-upf/omar-rq-multifeature-25hz)
- [mtg-upf/omar-rq-multifeature-25hz-fsq](https://huggingface.co/mtg-upf/omar-rq-multifeature-25hz-fsq)

## Pre-training OMAR-RQ models

1. Install development dependencies:

```bash
pip install -e .[train]
```

2. Prepare the experiment data

We downsample our data to 16 kHz mono and store it as 16-bit raw bytes ([numpy memmap](https://numpy.org/doc/stable/reference/generated/numpy.memmap.html) files). Check our [data preparation scripts](data/).
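
As an illustration of this storage format only (the repository's own data preparation scripts are the reference), the sketch below writes one track as raw 16-bit integer samples readable with `numpy.memmap`; it assumes `soundfile` for decoding and that the audio has already been resampled to 16 kHz mono:

```python
import numpy as np
import soundfile as sf  # assumed decoding dependency; any decoder works

audio, sr = sf.read("track.wav", dtype="int16", always_2d=True)  # (T', channels)
assert sr == 16000, "resample to 16 kHz before writing"

mono = audio.mean(axis=1).astype(np.int16)  # downmix to mono
mono.tofile("track.raw")                    # raw 16-bit samples, no header

# The file can later be memory-mapped without loading it all into RAM:
samples = np.memmap("track.raw", dtype=np.int16, mode="r")
```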

3. Configuration

Our experiment configuration is controlled with [gin-config](https://github.com/google/gin-config). Check the default [config file](../cfg/rq_single_view/config.gin) to see the different parameters that can be modified.

At least the following parameters should be modified, for example as in the snippet after this list:

- `DiscotubeMultiViewAudioDataModule.data_dir` -> Your base data folder.
- `DiscotubeMultiViewAudioDataModule.filelist_train` -> Filelist of training audio paths relative to the `data_dir` (one file per line).
- `DiscotubeMultiViewAudioDataModule.filelist_val` -> Same for the tracks on the validation split.
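
For example, the bindings might look like this in the gin file (paths are placeholders):

```
DiscotubeMultiViewAudioDataModule.data_dir = "/data/audio"
DiscotubeMultiViewAudioDataModule.filelist_train = "filelists/train.txt"
DiscotubeMultiViewAudioDataModule.filelist_val = "filelists/val.txt"
```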

4. Run the experiment

```bash
python src/train.py cfg/rq_single_view/config.gin
```

## Licensing information

The code in this repository is available under the [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.en.html) license.
The model weights are available under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license for non-commercial applications.
[Contact us](https://www.upf.edu/web/mtg/contact) for more information.

## Citation

If you find our work helpful or inspiring, please feel free to cite it:

```bibtex
@article{omar-rq-2025,
  author={Font-Clos, Francesc and Serra, Xavier},
  title={OMAR-RQ: Open Music Audio Representation Model Trained with Multi-Feature Masked Token Prediction},
  journal={arXiv preprint arXiv:2507.03482},
  year={2025},
}
```