Upload folder using huggingface_hub
README.md CHANGED
@@ -28,11 +28,17 @@ PARD is a high-performance speculative decoding method that also enables low-cos
 
 - **High Performance**: When integrated into an optimized inference framework called Transformers+, PARD delivers up to a 4.08× speedup, with LLaMA3.1 8B reaching a state-of-the-art 311.5 tokens per second. When integrated into vLLM, PARD delivers up to a 3.06× speedup, outperforming other speculative decoding methods in vLLM by 1.51×.
 
+
 <p align="center">
-<
-
+<figure style="display: inline-block; text-align: center;">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/630cb01cc169245d78fe76b6/Dh-7wE-l0YAfU9lXWssKf.png" width="90%">
+<figcaption style="font-style: italic; margin-top: 2px;">
+AR and AR+ represent baseline auto-regressive generation using Transformers and Transformers+, respectively. VSD denotes vanilla speculative decoding. PARD refers to the proposed method in this work.
+</figcaption>
+</figure>
 </p>
 
+
 ## Model Weights
 
 | Model Series | Model Name | Download |
@@ -56,4 +62,3 @@ Please visit [PARD](https://github.com/AMD-AIG-AIMA/PARD) repo for more informat
 year={2025}
 }
 ```
-
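
For context on the speculative-decoding setup the "High Performance" bullet benchmarks, the sketch below shows a vanilla draft-model loop through Hugging Face Transformers' assisted generation (`assistant_model` in `generate`). It is only a rough illustration with placeholder checkpoint names, not the PARD/Transformers+ or vLLM integration itself; see the [PARD](https://github.com/AMD-AIG-AIMA/PARD) repo for the actual usage.

```python
# Rough sketch only: vanilla draft-model speculative decoding via Hugging Face
# Transformers' assisted generation. The checkpoint names are placeholders, not
# the PARD draft models; refer to the PARD repo for the real integration.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder target model
draft_name = "meta-llama/Llama-3.2-1B-Instruct"   # placeholder small draft model

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name, torch_dtype="auto", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_name, torch_dtype="auto", device_map="auto")

prompt = "Speculative decoding speeds up generation by"
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# The draft model proposes several tokens per step; the target model verifies
# them in a single forward pass and keeps the accepted prefix (lossless output).
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```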