davanstrien (HF Staff) committed
Commit 8d25b68 · 1 Parent(s): 6a9905a
Files changed (1):
1. README.md +22 -2

README.md CHANGED
@@ -18,12 +18,14 @@ Batch text classification using BERT-style encoder models (e.g., BERT, RoBERTa,
 **Note**: This script is specifically for encoder-only classification models, not generative LLMs.
 
 **Features:**
+
 - 🚀 High-throughput batch processing
 - 🏷️ Automatic label mapping from model config
 - 📊 Confidence scores for predictions
 - 🤗 Direct integration with Hugging Face Hub
 
 **Usage:**
+
 ```bash
 # Local execution (requires GPU)
 uv run classify-dataset.py \
@@ -35,6 +37,7 @@ uv run classify-dataset.py \
 ```
 
 **HF Jobs execution:**
+
 ```bash
 hf jobs uv run \
 --flavor l4x1 \
@@ -52,6 +55,7 @@ hf jobs uv run \
 Generate responses for chat-formatted prompts using generative LLMs (e.g., Llama, Qwen, Mistral) with vLLM's high-performance inference engine.
 
 **Features:**
+
 - 💬 Automatic chat template application
 - 🔀 Multi-GPU tensor parallelism support
 - 📏 Smart filtering for prompts exceeding context length
@@ -60,6 +64,7 @@ Generate responses for chat-formatted prompts using generative LLMs (e.g., Llama
 - 🎛️ Full control over sampling parameters
 
 **Usage:**
+
 ```bash
 # Local execution with default Qwen model
 uv run generate-responses.py \
@@ -79,12 +84,13 @@ uv run generate-responses.py \
 ```
 
 **HF Jobs execution (multi-GPU):**
+
 ```bash
 hf jobs uv run \
 --flavor l4x4 \
 --image vllm/vllm-openai \
 -e UV_PRERELEASE=if-necessary \
--e HF_TOKEN=hf_*** \
+-s HF_TOKEN=hf_*** \
 https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
 davanstrien/cards_with_prompts \
 davanstrien/test-generated-responses \
@@ -97,6 +103,7 @@ hf jobs uv run \
 ## 🎯 Requirements
 
 All scripts in this collection require:
+
 - **NVIDIA GPU** with CUDA support
 - **Python 3.10+**
 - **UV package manager** ([install UV](https://docs.astral.sh/uv/getting-started/installation/))
@@ -104,6 +111,7 @@ All scripts in this collection require:
 ## 🚀 Performance Tips
 
 ### GPU Selection
+
 - **L4 GPU** (`--flavor l4x1`): Best value for classification and smaller models
 - **L4x4** (`--flavor l4x4`): Multi-GPU setup for large models (30B+ parameters)
 - **A10 GPU** (`--flavor a10g-large`): Higher memory for larger models
@@ -111,16 +119,20 @@ All scripts in this collection require:
 - Adjust batch size and tensor parallelism based on GPU configuration
 
 ### Batch Sizes
+
 - **Classification**: Start with 10,000 locally, up to 100,000 on HF Jobs
 - **Generation**: vLLM handles batching automatically - no manual configuration needed
 
 ### Multi-GPU Tensor Parallelism
+
 - Auto-detects available GPUs by default
 - Use `--tensor-parallel-size` to manually specify
 - Required for models larger than single GPU memory (e.g., 30B+ models)
 
 ### Handling Long Contexts
+
 The generate-responses.py script includes smart prompt filtering:
+
 - **Default behavior**: Skips prompts exceeding max_model_len
 - **Use `--max-model-len`**: Limit context to reduce memory usage
 - **Use `--no-skip-long-prompts`**: Fail on long prompts instead of skipping
@@ -129,6 +141,7 @@ The generate-responses.py script includes smart prompt filtering:
 ## 📚 About vLLM
 
 vLLM is a high-throughput inference engine optimized for:
+
 - Fast model serving with PagedAttention
 - Efficient batch processing
 - Support for various model architectures
@@ -137,13 +150,16 @@ vLLM is a high-throughput inference engine optimized for:
 ## 🔧 Technical Details
 
 ### UV Script Benefits
+
 - **Zero setup**: Dependencies install automatically on first run
 - **Reproducible**: Locked dependencies ensure consistent behavior
 - **Self-contained**: Everything needed is in the script file
 - **Direct execution**: Run from local files or URLs
 
 ### Dependencies
+
 Scripts use UV's inline metadata for automatic dependency management:
+
 ```python
 # /// script
 # requires-python = ">=3.10"
@@ -161,15 +177,18 @@ Scripts use UV's inline metadata for automatic dependency management:
 For bleeding-edge features, use the `UV_PRERELEASE=if-necessary` environment variable to allow pre-release versions when needed.
 
 ### Docker Image
+
 For HF Jobs, we recommend the official vLLM Docker image: `vllm/vllm-openai`
 
 This image includes:
+
 - Pre-installed CUDA libraries
 - vLLM and all dependencies
 - UV package manager
 - Optimized for GPU inference
 
 ### Environment Variables
+
 - `HF_TOKEN`: Your Hugging Face authentication token (auto-detected if logged in)
 - `UV_PRERELEASE=if-necessary`: Allow pre-release packages when required
 - `HF_HUB_ENABLE_HF_TRANSFER=1`: Automatically enabled for faster downloads
@@ -177,6 +196,7 @@ This image includes:
 ## 📝 Contributing
 
 Have a vLLM script to share? We welcome contributions that:
+
 - Solve real inference problems
 - Include clear documentation
 - Follow UV script best practices
@@ -186,4 +206,4 @@ Have a vLLM script to share? We welcome contributions that:
 
 - [vLLM Documentation](https://docs.vllm.ai/)
 - [UV Documentation](https://docs.astral.sh/uv/)
-- [UV Scripts Organization](https://huggingface.co/uv-scripts)
+- [UV Scripts Organization](https://huggingface.co/uv-scripts)
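
For context on the classification workflow the README describes (encoder model, automatic label mapping, confidence scores), the pattern is the standard Transformers text-classification pipeline. A minimal sketch, with an illustrative model id and batch size rather than whatever classify-dataset.py actually hard-codes:

```python
# Minimal sketch of encoder-based batch classification with confidence scores.
# Model id and batch size are illustrative, not taken from classify-dataset.py.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # any encoder classifier
    device=0,  # NVIDIA GPU, per the requirements section
)

texts = ["This script is great.", "The GPU ran out of memory."]
# Labels come from the model config's id2label mapping; scores are softmax confidences.
for result in classifier(texts, batch_size=32):
    print(result["label"], round(result["score"], 3))
```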
 
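
The `--tensor-parallel-size` option mentioned in the performance tips maps onto vLLM's `LLM` constructor argument of the same name. A rough sketch of multi-GPU generation on an l4x4 flavor, with an illustrative model id and sampling settings:

```python
# Rough sketch: vLLM generation with weights sharded across 4 GPUs.
# Model id and sampling values are illustrative.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",  # illustrative; the script defaults to a Qwen model
    tensor_parallel_size=4,            # shard across the 4 GPUs of an l4x4 flavor
    max_model_len=8192,                # cap context to reduce memory, cf. --max-model-len
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["What does tensor parallelism do?"], params)
print(outputs[0].outputs[0].text)
```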
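
The Dependencies hunk above is cut off at the hunk boundary, so the inline-metadata example appears incomplete. For reference, a complete PEP 723 block of the kind UV reads looks like this (the dependency list is illustrative, not the script's actual one):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "vllm",
#     "datasets",
#     "huggingface-hub",
# ]
# ///
```

UV parses this comment block on `uv run`, creates an isolated environment with the listed packages, and then executes the script, which is what makes the one-command local and HF Jobs invocations above work without manual setup.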