Commit 52dc8a2
Parent(s): 8d25b68

Remove requirements and performance tips sections from README.md for clarity and conciseness.

README.md CHANGED
@@ -100,29 +100,6 @@ hf jobs uv run \
     --max-model-len 8000
 ```
 
-## Requirements
-
-All scripts in this collection require:
-
-- **NVIDIA GPU** with CUDA support
-- **Python 3.10+**
-- **UV package manager** ([install UV](https://docs.astral.sh/uv/getting-started/installation/))
-
-## Performance Tips
-
-### GPU Selection
-
-- **L4 GPU** (`--flavor l4x1`): Best value for classification and smaller models
-- **L4x4** (`--flavor l4x4`): Multi-GPU setup for large models (30B+ parameters)
-- **A10 GPU** (`--flavor a10g-large`): Higher memory for larger models
-- **A100** (`--flavor a100-large`): Maximum performance for demanding workloads
-- Adjust batch size and tensor parallelism based on GPU configuration
-
-### Batch Sizes
-
-- **Classification**: Start with 10,000 locally, up to 100,000 on HF Jobs
-- **Generation**: vLLM handles batching automatically - no manual configuration needed
-
 ### Multi-GPU Tensor Parallelism
 
 - Auto-detects available GPUs by default
@@ -193,15 +170,6 @@ This image includes:
 - `UV_PRERELEASE=if-necessary`: Allow pre-release packages when required
 - `HF_HUB_ENABLE_HF_TRANSFER=1`: Automatically enabled for faster downloads
 
-## Contributing
-
-Have a vLLM script to share? We welcome contributions that:
-
-- Solve real inference problems
-- Include clear documentation
-- Follow UV script best practices
-- Include HF Jobs examples
-
 ## Resources
 
 - [vLLM Documentation](https://docs.vllm.ai/)
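The removed batch-size tip (start with 10,000 items locally, up to 100,000 on HF Jobs) can be sketched as a simple chunking loop. This is an illustrative assumption, not code from the repository: `classify` stands in for whatever per-batch inference call a given script uses, and the default batch size is taken from the removed text.

```python
from typing import Callable, Iterator, List, TypeVar

T = TypeVar("T")

def chunked(items: List[T], batch_size: int) -> Iterator[List[T]]:
    """Yield successive batches of at most `batch_size` items."""
    for start in range(0, len(items), batch_size):
        yield items[start : start + batch_size]

def run_in_batches(
    texts: List[str],
    classify: Callable[[List[str]], List[str]],
    batch_size: int = 10_000,  # removed README tip: 10k locally, up to 100k on HF Jobs
) -> List[str]:
    """Apply `classify` to each batch and concatenate the results."""
    results: List[str] = []
    for batch in chunked(texts, batch_size):
        results.extend(classify(batch))
    return results
```

For generation workloads the removed text notes that vLLM batches requests automatically, so a loop like this is only relevant on the classification side.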