<div align="center">
<h1>PARD: Accelerating LLM Inference with Low-Cost PARallel Draft Model Adaptation</h1>
</div>

<p align="center">
<a href="https://arxiv.org/abs/2504.18583"><b>Paper</b></a> |
<a href="https://github.com/AMD-AIG-AIMA/PARD"><b>Github</b></a> |
</p>

## Introduction

PARD is a high-performance speculative decoding method that also enables low-cost adaptation of autoregressive draft models into parallel draft models (a conceptual sketch of one decoding step follows the advantages list). It offers the following advantages:

- **High Performance**: When integrated into an optimized inference framework called Transformers+, PARD delivers up to a 4.08× speedup, with LLaMA3.1 8B reaching a state-of-the-art 311.5 tokens per second. When integrated into vLLM, PARD delivers up to a 3.06× speedup, outperforming other speculative decoding methods in vLLM by 1.51× (a vLLM usage sketch follows the figure below).
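
To make the parallel-draft idea concrete, here is a conceptual sketch of one decoding step. The helper names (`propose`, `verify`, `next_token`) are hypothetical stand-ins rather than PARD's actual API; the point is only that a parallel draft proposes its k tokens in a single forward pass where an autoregressive draft would need k sequential passes.

```python
# Conceptual sketch of one speculative-decoding step with a parallel draft.
# propose / verify / next_token are hypothetical stand-ins, not PARD's API.

def speculative_step(target, draft, prefix, k=4):
    # A parallel draft proposes k tokens in ONE forward pass; an
    # autoregressive draft would need k sequential passes here.
    proposal = draft.propose(prefix, num_tokens=k)

    # The target scores prefix + proposal in a single forward pass and
    # accepts the longest prefix of the proposal it agrees with.
    accepted = target.verify(prefix, proposal)

    # Guaranteed progress: append the target's own next token after the
    # accepted span, so each step yields at least one new token.
    return prefix + accepted + [target.next_token(prefix + accepted)]
```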

<p align="center">
<picture><img src="https://cdn-uploads.huggingface.co/production/uploads/630cb01cc169245d78fe76b6/Dh-7wE-l0YAfU9lXWssKf.png" width="90%"></picture>
<br><div align="center" width="90%"><em>AR and AR+ represent baseline auto-regressive generation using Transformers and Transformers+, respectively. VSD denotes vanilla speculative decoding. PARD refers to the proposed method in this work.</em></div><br>
</p>
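
Below is a minimal sketch of what the vLLM path can look like. The model IDs are placeholders and the keyword arguments are an assumption tied to older vLLM releases (newer releases configure speculation via a `speculative_config` dict instead); see the [PARD](https://github.com/AMD-AIG-AIMA/PARD) repo for the supported setup.

```python
# Minimal sketch of speculative decoding in vLLM with a PARD draft model.
# Model IDs are placeholders; kwargs follow older vLLM releases.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",   # target model (placeholder)
    speculative_model="amd/PARD-Llama-3.2-1B",  # PARD draft (placeholder)
    num_speculative_tokens=4,                   # draft tokens per step
)

out = llm.generate(
    ["Explain speculative decoding in two sentences."],
    SamplingParams(temperature=0.0, max_tokens=96),
)
print(out[0].outputs[0].text)
```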
## Model Weights
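
The card declares `library_name: transformers`, so a draft model loads like any causal LM. A minimal sketch, with a placeholder repo ID standing in for whichever draft weights you pick:

```python
# Minimal sketch: PARD draft weights load as a standard causal LM via
# Transformers. The repo ID below is a placeholder; substitute the draft
# model you actually downloaded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

draft_id = "amd/PARD-Llama-3.2-1B"  # hypothetical placeholder ID
tokenizer = AutoTokenizer.from_pretrained(draft_id)
draft = AutoModelForCausalLM.from_pretrained(
    draft_id,
    torch_dtype=torch.bfloat16,  # halve memory vs fp32 on supported GPUs
    device_map="auto",
)
```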

Please visit the [PARD](https://github.com/AMD-AIG-AIMA/PARD) repo for more information.

year={2025}
}
```