Add comprehensive model card for OMAR-RQ
#1 · by nielsr (HF Staff) · opened

README.md ADDED
@@ -0,0 +1,77 @@
---
pipeline_tag: audio-classification
library_name: omar_rq
license: cc-by-nc-sa-4.0
tags:
- audio-feature-extraction
- music
---

# OMAR-RQ: Open Music Audio Representation Model Trained with Multi-Feature Masked Token Prediction

This repository contains the model weights for **OMAR-RQ**, as presented in the paper [OMAR-RQ: Open Music Audio Representation Model Trained with Multi-Feature Masked Token Prediction](https://huggingface.co/papers/2507.03482).

## Abstract
Developing open-source foundation models is essential for advancing research in music audio understanding and ensuring access to powerful, multipurpose representations for music information retrieval. We present OMAR-RQ, a model trained with self-supervision via masked token classification methodologies using a large-scale dataset with over 330,000 hours of music audio. We experiment with different input features and quantization options, and achieve state-of-the-art performance in music tagging, pitch estimation, chord recognition, beat tracking, segmentation, and difficulty estimation among open self-supervised models. We open-source our training and evaluation pipelines and model weights, available at [https://github.com/MTG/OMAR-RQ](https://github.com/MTG/OMAR-RQ).

## Code
The training, validation, and inference code, along with further details, is available at the official GitHub repository: [https://github.com/MTG/OMAR-RQ](https://github.com/MTG/OMAR-RQ).

## Inference
You can load an OMAR-RQ model directly by specifying its Hugging Face model ID. First, install the library:

```bash
pip install omar-rq
```

Then, use the following Python code for embedding extraction:

```python
import torch
from omar_rq import get_model

# Embedding extraction example: 4 seconds of random audio at 16 kHz
x = torch.randn(1, 16000 * 4).cpu()  # (batch_size, samples)

model_id = "mtg-upf/omar-rq-multifeature-25hz-fsq"  # This repository's model ID
model = get_model(model_id=model_id, device="cpu")

# Extract frame-level embeddings from layer 6
embeddings = model.extract_embeddings(x, layers=[6])

# Timestamps for each embedding frame, spaced by the model's embedding rate
timestamps = torch.arange(embeddings.shape[2]) / model.eps

print(f"Extracted embeddings shape: {embeddings.shape}")
print(f"Number of timestamps: {len(timestamps)}")
```

For more details on `get_model` and `extract_embeddings` usage, please refer to the [GitHub repository](https://github.com/MTG/OMAR-RQ).
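
To run on real audio instead of a random tensor, the sketch below loads a file with `torchaudio`, downmixes to mono, and resamples to 16 kHz (an assumption consistent with the 16 kHz synthetic input above); the file path, chosen layer, and mean-pooling step are illustrative, not part of the official API.

```python
import torch
import torchaudio

from omar_rq import get_model

# Hypothetical input file; replace with your own audio
waveform, sr = torchaudio.load("track.wav")  # (channels, samples)

# Downmix to mono and resample to 16 kHz (assumed model input rate,
# matching the 16000-samples-per-second example above)
waveform = waveform.mean(dim=0, keepdim=True)
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, sr, 16000)

model = get_model(model_id="mtg-upf/omar-rq-multifeature-25hz-fsq", device="cpu")

# Frame-level embeddings from a single layer; mean-pool over the time
# dimension (dim 2, as in the timestamp computation above) to get one
# vector per track
embeddings = model.extract_embeddings(waveform, layers=[6])
track_embedding = embeddings.mean(dim=2)

print(track_embedding.shape)
```

Mean-pooling over time is only one aggregation strategy; for frame-level tasks such as beat tracking or chord recognition, keep the full sequence and align it with the timestamps computed above.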

## Available Models
OMAR-RQ models are offered in different configurations, each with its own strengths and weaknesses.
Models based on mel spectrograms (**base** and **multicodebook**) tend to perform better on semantic tasks such as auto-tagging, structure recognition, and difficulty estimation.
On the other hand, **multifeature-25hz-fsq** offers the best performance in tonal and temporal tasks such as pitch and chord estimation, and beat tracking (see the selection sketch after the table below).

| Model | Hugging Face ID | Input | Rate (Hz) | Tagging (_mAP_) | Difficulty (_MSE_) | Pitch (_acc._) | Chord (_acc._) | Beat (_F1_) | Structure (_acc._) |
|---|---|---|---|---|---|---|---|---|---|
| **base** | [mtg-upf/omar-rq-base](https://huggingface.co/mtg-upf/omar-rq-base) | mel | 15.63 | .482 | **1.65** | .892 | .657 | .783 | **.647** |
| **multicodebook** | [mtg-upf/omar-rq-multicodebook](https://huggingface.co/mtg-upf/omar-rq-multicodebook) | mel | 15.63 | **.488** | 1.66 | .897 | .675 | .775 | .639 |
| **multifeature** | [mtg-upf/omar-rq-multifeature](https://huggingface.co/mtg-upf/omar-rq-multifeature) | audio | 18.75 | .467 | 1.76 | .938 | .734 | .833 | .623 |
| **multifeature-25hz** | [mtg-upf/omar-rq-multifeature-25hz](https://huggingface.co/mtg-upf/omar-rq-multifeature-25hz) | audio | 25 | .463 | 1.79 | .932 | .728 | .848 | .628 |
| **multifeature-25hz-fsq** | [mtg-upf/omar-rq-multifeature-25hz-fsq](https://huggingface.co/mtg-upf/omar-rq-multifeature-25hz-fsq) | audio | 25 | .463 | 1.71 | **.940** | **.749** | **.855** | .628 |
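
As a rough guide, the illustrative snippet below (not part of the `omar_rq` API) maps each downstream task to the configuration that scores best in the table above and loads it with `get_model`:

```python
from omar_rq import get_model

# Best-scoring configuration per task, according to the table above
# (illustrative mapping; re-check against your own benchmark)
BEST_MODEL_FOR_TASK = {
    "tagging": "mtg-upf/omar-rq-multicodebook",
    "difficulty": "mtg-upf/omar-rq-base",
    "structure": "mtg-upf/omar-rq-base",
    "pitch": "mtg-upf/omar-rq-multifeature-25hz-fsq",
    "chord": "mtg-upf/omar-rq-multifeature-25hz-fsq",
    "beat": "mtg-upf/omar-rq-multifeature-25hz-fsq",
}

model = get_model(model_id=BEST_MODEL_FOR_TASK["chord"], device="cpu")
```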

## License
The code in the [OMAR-RQ GitHub repository](https://github.com/MTG/OMAR-RQ) is available under the [AGPL-3.0 license](https://www.gnu.org/licenses/agpl-3.0.en.html).
The model weights in this Hugging Face repository are released under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/) for non-commercial applications.
|
67 |
+
## Citation
|
68 |
+
If you find this work helpful or inspiring, please feel free to cite it using the following BibTeX entry:
|
69 |
+
|
70 |
+
```bibtex
|
71 |
+
@article{fust,
|
72 |
+
title={OMAR-RQ: Open Music Audio Representation Model Trained with Multi-Feature Masked Token Prediction},
|
73 |
+
author={Fust, Albert and Pons, Jordi and Bogdanov, Dmitry and Oñoro-Rubio, Daniel and Gómez, Emilia},
|
74 |
+
journal={arXiv preprint arXiv:2507.03482},
|
75 |
+
year={2025}
|
76 |
+
}
|
77 |
+
```
|