PsiPi committed
Commit 4276a5c
1 Parent(s): c53912e

Update README.md

Files changed (1): README.md (+176, -174)

README.md CHANGED
---
language:
- en
library_name: stable-audio-tools
license: other
license_name: stable-audio-community
license_link: LICENSE
pipeline_tag: text-to-audio
extra_gated_prompt: By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/stabilityai/stable-audio-open-1.0/blob/main/LICENSE.md)
  and acknowledge Stability AI's [Privacy Policy](https://stability.ai/privacy-policy).
extra_gated_fields:
  Name: text
  Email: text
  Country: country
  Organization or Affiliation: text
  Receive email updates and promotions on Stability AI products, services, and research?:
    type: select
    options:
      - 'Yes'
      - 'No'
---

# Stable Audio Open 1.0

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/64a22257d3149e05bc6d259f/iKlxzZY0svXP9j6j1hiWW.webp)

Please note: for commercial use, refer to [https://stability.ai/license](https://stability.ai/license).

This version was fine-tuned by twobob on [this dataset](https://www.kaggle.com/datasets/twobob/moar-bobtex-n-friends-gpu-fodder).

## Model Description
`Stable Audio Open 1.0` generates variable-length (up to 47s) stereo audio at 44.1kHz from text prompts. It comprises three components: an autoencoder that compresses waveforms into a manageable sequence length, a T5-based text embedding for text conditioning, and a transformer-based diffusion (DiT) model that operates in the latent space of the autoencoder.
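
The 47-second ceiling follows from the fixed generation window in the model config. As a quick sanity check (the `sample_size` value below is an assumption about the released config; read the authoritative numbers from `model_config` after loading, as in the example further down):

```python
# Rough arithmetic behind the "up to 47s" figure.
# NOTE: sample_size is an assumed config value, not read from the model here;
# the loading example below shows how to get it from model_config directly.
sample_rate = 44_100      # Hz, as stated in this model card
sample_size = 2_097_152   # samples in one generation window (assumed)
print(sample_size / sample_rate)  # ~47.55 seconds
```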

## Usage

This model can be used with:
1. the [`stable-audio-tools`](https://github.com/Stability-AI/stable-audio-tools) library
2. the [`diffusers`](https://huggingface.co/docs/diffusers/main/en/index) library


### Using with `stable-audio-tools`

This model is made to be used with the [`stable-audio-tools`](https://github.com/Stability-AI/stable-audio-tools) library for inference, for example:

```python
import torch
import torchaudio
from einops import rearrange
from stable_audio_tools import get_pretrained_model
from stable_audio_tools.inference.generation import generate_diffusion_cond

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download model
model, model_config = get_pretrained_model("stabilityai/stable-audio-open-1.0")
sample_rate = model_config["sample_rate"]
sample_size = model_config["sample_size"]

model = model.to(device)

# Set up text and timing conditioning
conditioning = [{
    "prompt": "128 BPM tech house drum loop",
    "seconds_start": 0,
    "seconds_total": 30
}]

# Generate stereo audio
output = generate_diffusion_cond(
    model,
    steps=100,
    cfg_scale=7,
    conditioning=conditioning,
    sample_size=sample_size,
    sigma_min=0.3,
    sigma_max=500,
    sampler_type="dpmpp-3m-sde",
    device=device
)

# Rearrange audio batch to a single sequence
output = rearrange(output, "b d n -> d (b n)")

# Peak normalize, clip, convert to int16, and save to file
output = output.to(torch.float32).div(torch.max(torch.abs(output))).clamp(-1, 1).mul(32767).to(torch.int16).cpu()
torchaudio.save("output.wav", output, sample_rate)
```
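
The generated clip covers the full window (about 47 seconds), so anything past the requested `seconds_total` is typically silence. If you only want the requested duration, an optional trim, reusing the variables from the example above, looks like this:

```python
# Optional: trim the saved audio to the requested duration.
# Reuses `output` (int16 tensor, shape [channels, samples]), `sample_rate`,
# and the 30-second request above; assumes a single prompt in the batch.
seconds_total = 30
trimmed = output[:, : seconds_total * sample_rate]
torchaudio.save("output_trimmed.wav", trimmed, sample_rate)
```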

### Using with `diffusers`

Make sure you upgrade to the latest version of `diffusers`: `pip install -U diffusers`. Then you can run:

```py
import torch
import soundfile as sf
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained("stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# define the prompts
prompt = "The sound of a hammer hitting a wooden surface."
negative_prompt = "Low quality."

# set the seed for generator
generator = torch.Generator("cuda").manual_seed(0)

# run the generation
audio = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=200,
    audio_end_in_s=10.0,
    num_waveforms_per_prompt=3,
    generator=generator,
).audios

output = audio[0].T.float().cpu().numpy()
sf.write("hammer.wav", output, pipe.vae.sampling_rate)
```

Refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/index) for more details on optimization and usage.
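
One common optimization when GPU memory is limited is CPU offloading. Below is a minimal sketch of the same setup with offloading enabled; it uses the generic `diffusers` `enable_model_cpu_offload` hook, which requires the `accelerate` package:

```python
import torch
from diffusers import StableAudioPipeline

pipe = StableAudioPipeline.from_pretrained(
    "stabilityai/stable-audio-open-1.0", torch_dtype=torch.float16
)
# Keep submodules on CPU and move them to the GPU only while they are needed,
# lowering peak VRAM. Do not also call pipe.to("cuda") when offloading.
pipe.enable_model_cpu_offload()
```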

## Model Details
* **Model type**: `Stable Audio Open 1.0` is a latent diffusion model based on a transformer architecture.
* **Language(s)**: English
* **License**: [Stability AI Community License](https://huggingface.co/stabilityai/stable-audio-open-1.0/blob/main/LICENSE.md).
* **Commercial License**: To use this model commercially, please refer to [https://stability.ai/license](https://stability.ai/license)
* **Research Paper**: [https://arxiv.org/abs/2407.14358](https://arxiv.org/abs/2407.14358)

## Training dataset

### Datasets Used
Our dataset consists of 486,492 audio recordings, of which 472,618 are from Freesound and 13,874 are from the Free Music Archive (FMA). All audio files are licensed under CC0, CC BY, or CC Sampling+. This data is used to train our autoencoder and DiT. We use a publicly available pre-trained T5 model ([t5-base](https://huggingface.co/google-t5/t5-base)) for text conditioning.

### Attribution
Attribution for all audio recordings used to train Stable Audio Open 1.0 can be found in this repository; a short loading sketch follows the list below.
- Freesound attribution [[csv](https://huggingface.co/stabilityai/stable-audio-open-1.0/blob/main/freesound_dataset_attribution2.csv)]
- FMA attribution [[csv](https://huggingface.co/stabilityai/stable-audio-open-1.0/blob/main/fma_dataset_attribution2.csv)]
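
For example, the attribution files can be pulled and inspected directly from the Hub. This is a minimal sketch; it assumes `huggingface_hub` and `pandas` are installed and that you have accepted the gated license so the download is authorized:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download one of the attribution CSVs from this model repository
# (gated repo: accept the license and log in with an HF token first).
path = hf_hub_download(
    repo_id="stabilityai/stable-audio-open-1.0",
    filename="freesound_dataset_attribution2.csv",
)

# Inspect the attribution records; the column layout is whatever the CSV defines.
df = pd.read_csv(path)
print(df.shape)
print(df.columns.tolist())
print(df.head())
```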

### Mitigations
We conducted an in-depth analysis to ensure no unauthorized copyrighted music was present in our training data before we began training.

To that end, we first identified music samples in Freesound using the [PANNs](https://github.com/qiuqiangkong/audioset_tagging_cnn) music classifier based on AudioSet classes. A sample was treated as music if it contained at least 30 seconds of audio predicted to belong to a music-related class with probability above 0.15 (PANNs output probabilities range from 0 to 1). This threshold was determined by classifying known music examples from FMA and ensuring no false negatives were present.
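
As an illustration only, the selection rule described above amounts to something like the sketch below; `music_probabilities_per_second` stands in for pre-computed PANNs outputs and is a hypothetical input, not the actual pipeline code:

```python
MUSIC_THRESHOLD = 0.15   # PANNs class-probability threshold from the text above
MIN_MUSIC_SECONDS = 30   # minimum amount of music-like audio to flag a sample

def is_music_sample(music_probabilities_per_second: list[float]) -> bool:
    """Flag a recording as music if at least 30 seconds of it score above the
    0.15 threshold for music-related AudioSet classes.

    `music_probabilities_per_second` is assumed to hold one max music-class
    probability per second of audio (hypothetical pre-computed PANNs output).
    """
    music_seconds = sum(p >= MUSIC_THRESHOLD for p in music_probabilities_per_second)
    return music_seconds >= MIN_MUSIC_SECONDS

# Example: a 60-second clip whose second half looks like music
probabilities = [0.02] * 30 + [0.40] * 30
print(is_music_sample(probabilities))  # True
```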

The identified music samples were sent to Audible Magic’s identification services, a trusted content detection company, to ensure the absence of copyrighted music. Audible Magic flagged suspected copyrighted music, which we subsequently removed before training on the dataset. The majority of the removed content was field recordings in which copyrighted music was playing in the background. Following this procedure, we were left with 266,324 CC0, 194,840 CC-BY, and 11,454 CC Sampling+ audio recordings.

We also conducted an in-depth analysis to ensure no copyrighted content was present in FMA's subset. In this case, the procedure was slightly different because the FMA subset consists of music signals. We did a metadata search against a large database of copyrighted music ([Spotify tracks dataset](https://www.kaggle.com/datasets/maharshipandya/-spotify-tracks-dataset)) and flagged any potential match. The flagged content was reviewed individually by humans. After this process, we ended up with 8,967 CC-BY and 4,907 CC0 tracks.
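
A simplified sketch of such a metadata match is shown below; the `artist` and `title` columns and the exact-match criterion are assumptions for illustration, not the precise procedure that was used:

```python
import pandas as pd

def normalize(value: str) -> str:
    """Lower-case and collapse whitespace so near-identical metadata lines up."""
    return " ".join(str(value).lower().split())

def flag_potential_matches(fma: pd.DataFrame, copyrighted: pd.DataFrame) -> pd.DataFrame:
    """Return FMA rows whose (artist, title) pair also appears in the
    copyrighted-music metadata table. Flagged rows then go to human review,
    as described above. The 'artist'/'title' column names are assumptions."""
    fma_keys = fma["artist"].map(normalize) + " :: " + fma["title"].map(normalize)
    known = set(copyrighted["artist"].map(normalize) + " :: " + copyrighted["title"].map(normalize))
    return fma[fma_keys.isin(known)]
```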

## Use and Limitations

### Intended Use
The primary use of Stable Audio Open is research and experimentation on AI-based music and audio generation, including:

- Research efforts to better understand the limitations of generative models and further improve the state of science.
- Generation of music and audio guided by text to explore the current abilities of generative AI models by machine learning practitioners and artists.

### Out-of-Scope Use Cases
The model should not be used in downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate audio or music pieces that create hostile or alienating environments for people.

### Limitations
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model is better at generating sound effects and field recordings than music.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.

### Biases
The data sources potentially lack diversity, and not all cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres and sound effects that exist. The generated samples from the model will reflect the biases present in the training data.