---
license: apache-2.0
language:
- en
tags:
- chat
- audio
---

4-bit quant of: https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct

Since no other quant of this model exists yet, I am uploading it here.

It worked fine with English and Japanese in my tests.
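
A minimal sketch of how this 4-bit checkpoint can be loaded. It assumes the weights were quantized with bitsandbytes (check this repo's `config.json`); the repo id placeholder and the option to re-quantize the upstream weights on the fly are my own additions, not part of the upstream instructions:

```python
# Sketch: two ways to get a 4-bit Qwen2-Audio-7B-Instruct.
# Assumes bitsandbytes is installed; the repo id below is a placeholder
# for this repository's actual id.
import torch
from transformers import AutoProcessor, BitsAndBytesConfig, Qwen2AudioForConditionalGeneration

quant_repo = "<this-repo-id>"  # placeholder

# (a) Load this repo's pre-quantized weights; a quantization config saved
#     in config.json is picked up automatically by from_pretrained.
processor = AutoProcessor.from_pretrained(quant_repo)
model = Qwen2AudioForConditionalGeneration.from_pretrained(quant_repo, device_map="auto")

# (b) Or quantize the upstream fp16 weights to 4-bit on the fly.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = Qwen2AudioForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-Audio-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```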

# Qwen2-Audio-7B-Instruct

## Introduction

Qwen2-Audio is the new series of Qwen large audio-language models. Qwen2-Audio can accept various audio signal inputs and perform audio analysis or respond directly in text to spoken instructions. We introduce two distinct audio interaction modes:

* voice chat: users can freely engage in voice interactions with Qwen2-Audio without text input;

* audio analysis: users can provide audio together with text instructions for analysis during the interaction.

We release Qwen2-Audio-7B and Qwen2-Audio-7B-Instruct, which are the pretrained model and the chat model, respectively.

For more details, please refer to our [Blog](https://qwenlm.github.io/blog/qwen2-audio/), [GitHub](https://github.com/QwenLM/Qwen2-Audio), and [Report](https://www.arxiv.org/abs/2407.10759).
<br>

## Requirements
The code for Qwen2-Audio is in the latest Hugging Face `transformers`. We advise you to build from source with `pip install git+https://github.com/huggingface/transformers`; otherwise you might encounter the following error:
```
KeyError: 'qwen2-audio'
```
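
As a quick sanity check (a sketch of my own, not from the upstream card), you can confirm that the installed build actually ships the Qwen2-Audio classes before downloading any weights:

```python
# Sketch: verify the installed transformers build knows about Qwen2-Audio.
# If this import fails, loading the model will hit the KeyError above.
import transformers
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor

print(transformers.__version__)
```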

## Quickstart

In the following, we demonstrate how to use `Qwen2-Audio-7B-Instruct` for inference, supporting both voice chat and audio analysis modes. Note that we use the ChatML format for dialog; in this demo we show how to leverage `apply_chat_template` for this purpose.

### Voice Chat Inference
In voice chat mode, users can freely engage in voice interactions with Qwen2-Audio without text input:
```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/guess_age_gender.wav"},
    ]},
    {"role": "assistant", "content": "Yes, the speaker is female and in her twenties."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/translate_to_chinese.wav"},
    ]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios = []
for message in conversation:
    if isinstance(message["content"], list):
        for ele in message["content"]:
            if ele["type"] == "audio":
                audios.append(librosa.load(
                    BytesIO(urlopen(ele['audio_url']).read()),
                    sr=processor.feature_extractor.sampling_rate)[0]
                )

inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
inputs.input_ids = inputs.input_ids.to("cuda")

generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]

response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
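
If your audio lives on disk rather than behind a URL, the same loop works with a file path, reusing the `processor` from the snippet above; a small sketch of my own (the filename is a placeholder):

```python
# Sketch: build the audios list from a local file instead of a URL.
# "my_recording.wav" is a placeholder path.
import librosa

audios = [librosa.load("my_recording.wav", sr=processor.feature_extractor.sampling_rate)[0]]
```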

### Audio Analysis Inference
In audio analysis mode, users can provide both audio and text instructions for analysis:
```python
from io import BytesIO
from urllib.request import urlopen
import librosa
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")

conversation = [
    {'role': 'system', 'content': 'You are a helpful assistant.'},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"},
        {"type": "text", "text": "What's that sound?"},
    ]},
    {"role": "assistant", "content": "It is the sound of glass shattering."},
    {"role": "user", "content": [
        {"type": "text", "text": "What can you do when you hear that?"},
    ]},
    {"role": "assistant", "content": "Stay alert and cautious, and check if anyone is hurt or if there is any damage to property."},
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"},
        {"type": "text", "text": "What does the person say?"},
    ]},
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios = []
for message in conversation:
    if isinstance(message["content"], list):
        for ele in message["content"]:
            if ele["type"] == "audio":
                audios.append(
                    librosa.load(
                        BytesIO(urlopen(ele['audio_url']).read()),
                        sr=processor.feature_extractor.sampling_rate)[0]
                )

inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
inputs.input_ids = inputs.input_ids.to("cuda")

generate_ids = model.generate(**inputs, max_length=256)
generate_ids = generate_ids[:, inputs.input_ids.size(1):]

response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
```
+
|
128 |
+
### Batch Inference
|
129 |
+
We also support batch inference:
|
130 |
+
```python
|
131 |
+
from io import BytesIO
|
132 |
+
from urllib.request import urlopen
|
133 |
+
import librosa
|
134 |
+
from transformers import Qwen2AudioForConditionalGeneration, AutoProcessor
|
135 |
+
|
136 |
+
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
|
137 |
+
model = Qwen2AudioForConditionalGeneration.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct", device_map="auto")
|
138 |
+
|
139 |
+
conversation1 = [
|
140 |
+
{"role": "user", "content": [
|
141 |
+
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/glass-breaking-151256.mp3"},
|
142 |
+
{"type": "text", "text": "What's that sound?"},
|
143 |
+
]},
|
144 |
+
{"role": "assistant", "content": "It is the sound of glass shattering."},
|
145 |
+
{"role": "user", "content": [
|
146 |
+
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/f2641_0_throatclearing.wav"},
|
147 |
+
{"type": "text", "text": "What can you hear?"},
|
148 |
+
]}
|
149 |
+
]
|
150 |
+
|
151 |
+
conversation2 = [
|
152 |
+
{"role": "user", "content": [
|
153 |
+
{"type": "audio", "audio_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2-Audio/audio/1272-128104-0000.flac"},
|
154 |
+
{"type": "text", "text": "What does the person say?"},
|
155 |
+
]},
|
156 |
+
]
|
157 |
+
|
158 |
+
conversations = [conversation1, conversation2]
|
159 |
+
|
160 |
+
text = [processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False) for conversation in conversations]
|
161 |
+
|
162 |
+
audios = []
|
163 |
+
for conversation in conversations:
|
164 |
+
for message in conversation:
|
165 |
+
if isinstance(message["content"], list):
|
166 |
+
for ele in message["content"]:
|
167 |
+
if ele["type"] == "audio":
|
168 |
+
audios.append(
|
169 |
+
librosa.load(
|
170 |
+
BytesIO(urlopen(ele['audio_url']).read()),
|
171 |
+
sr=processor.feature_extractor.sampling_rate)[0]
|
172 |
+
)
|
173 |
+
|
174 |
+
inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
|
175 |
+
inputs['input_ids'] = inputs['input_ids'].to("cuda")
|
176 |
+
inputs.input_ids = inputs.input_ids.to("cuda")
|
177 |
+
|
178 |
+
generate_ids = model.generate(**inputs, max_length=256)
|
179 |
+
generate_ids = generate_ids[:, inputs.input_ids.size(1):]
|
180 |
+
|
181 |
+
response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
|
182 |
+
```
|

## Citation

If you find our work helpful, feel free to cite us.

```BibTeX
@article{Qwen2-Audio,
  title={Qwen2-Audio Technical Report},
  author={Chu, Yunfei and Xu, Jin and Yang, Qian and Wei, Haojie and Wei, Xipin and Guo, Zhifang and Leng, Yichong and Lv, Yuanjun and He, Jinzheng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2407.10759},
  year={2024}
}
```

```BibTeX
@article{Qwen-Audio,
  title={Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models},
  author={Chu, Yunfei and Xu, Jin and Zhou, Xiaohuan and Yang, Qian and Zhang, Shiliang and Yan, Zhijie and Zhou, Chang and Zhou, Jingren},
  journal={arXiv preprint arXiv:2311.07919},
  year={2023}
}
```