davanstrien (HF Staff) committed
Commit 6a9905a · 1 Parent(s): 68dd0ca

Update README.md with enhanced usage instructions for classify-dataset.py and generate-responses.py, including multi-GPU support and environment variable details.

Files changed (1):
  README.md (+91 -27)
README.md CHANGED
@@ -36,20 +36,62 @@ uv run classify-dataset.py \
 
  **HF Jobs execution:**
  ```bash
- hfjobs run \
    --flavor l4x1 \
-   --secret HF_TOKEN=$(python -c "from huggingface_hub import HfFolder; print(HfFolder.get_token())") \
-   vllm/vllm-openai:latest \
-   /bin/bash -c '
-     uv run https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
-       davanstrien/ModernBERT-base-is-new-arxiv-dataset \
-       username/input-dataset \
-       username/output-dataset \
-       --inference-column text \
-       --batch-size 100000
-   ' \
-   --project vllm-classify \
-   --name my-classification-job
  ```
 
  ## 🎯 Requirements
@@ -62,13 +104,27 @@ All scripts in this collection require:
  ## 🚀 Performance Tips
 
  ### GPU Selection
- - **L4 GPU** (`--flavor l4x1`): Best value for classification tasks
- - **A10 GPU** (`--flavor a10`): Higher memory for larger models
- - Adjust batch size based on GPU memory
 
  ### Batch Sizes
- - **Local GPUs**: Start with 10,000 and adjust based on memory
- - **HF Jobs**: Can use larger batches (50,000-100,000) with cloud GPUs
 
  ## 📚 About vLLM
 
@@ -87,22 +143,25 @@ vLLM is a high-throughput inference engine optimized for:
  - **Direct execution**: Run from local files or URLs
 
  ### Dependencies
- Scripts use UV's inline metadata with custom package indexes for vLLM's optimized builds:
  ```python
  # /// script
  # requires-python = ">=3.10"
- # dependencies = ["vllm", "datasets", "torch", ...]
- #
- # [[tool.uv.index]]
- # url = "https://flashinfer.ai/whl/cu126/torch2.6"
- #
- # [[tool.uv.index]]
- # url = "https://wheels.vllm.ai/nightly"
  # ///
  ```
 
  ### Docker Image
- For HF Jobs, we use the official vLLM Docker image: `vllm/vllm-openai:latest`
 
  This image includes:
  - Pre-installed CUDA libraries
@@ -110,6 +169,11 @@ This image includes:
  - UV package manager
  - Optimized for GPU inference
 
  ## 📝 Contributing
 
  Have a vLLM script to share? We welcome contributions that:
 
 
  **HF Jobs execution:**
  ```bash
+ hf jobs uv run \
    --flavor l4x1 \
+   --image vllm/vllm-openai \
+   https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
+   davanstrien/ModernBERT-base-is-new-arxiv-dataset \
+   username/input-dataset \
+   username/output-dataset \
+   --inference-column text \
+   --batch-size 100000
+ ```
+
+ ### generate-responses.py
+
+ Generate responses for chat-formatted prompts using generative LLMs (e.g., Llama, Qwen, Mistral) with vLLM's high-performance inference engine.
+
+ **Features:**
+ - 💬 Automatic chat template application
+ - 🔀 Multi-GPU tensor parallelism support
+ - 📏 Smart filtering for prompts exceeding context length
+ - 📊 Comprehensive dataset cards with generation metadata
+ - ⚡ HF Transfer enabled for fast model downloads
+ - 🎛️ Full control over sampling parameters
+
+ **Usage:**
+ ```bash
+ # Local execution with default Qwen model
+ uv run generate-responses.py \
+   username/input-dataset \
+   username/output-dataset \
+   --messages-column messages \
+   --max-tokens 1024
+
+ # With custom model and parameters
+ uv run generate-responses.py \
+   username/input-dataset \
+   username/output-dataset \
+   --model-id meta-llama/Llama-3.1-8B-Instruct \
+   --temperature 0.9 \
+   --top-p 0.95 \
+   --max-model-len 8192
+ ```
+
+ **HF Jobs execution (multi-GPU):**
+ ```bash
+ hf jobs uv run \
+   --flavor l4x4 \
+   --image vllm/vllm-openai \
+   -e UV_PRERELEASE=if-necessary \
+   -e HF_TOKEN=hf_*** \
+   https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
+   davanstrien/cards_with_prompts \
+   davanstrien/test-generated-responses \
+   --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
+   --gpu-memory-utilization 0.9 \
+   --max-tokens 600 \
+   --max-model-len 8000
  ```
 
  ## 🎯 Requirements
 
  ## 🚀 Performance Tips
 
  ### GPU Selection
+ - **L4 GPU** (`--flavor l4x1`): Best value for classification and smaller models
+ - **L4x4** (`--flavor l4x4`): Multi-GPU setup for large models (30B+ parameters)
+ - **A10 GPU** (`--flavor a10g-large`): Higher memory for larger models
+ - **A100** (`--flavor a100-large`): Maximum performance for demanding workloads
+ - Adjust batch size and tensor parallelism based on GPU configuration
 
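For instance, moving the generation example above onto a single large GPU is mostly a matter of swapping the `--flavor` value. A minimal sketch, assuming the `a100-large` flavor from the list above and placeholder dataset names:

```bash
hf jobs uv run \
  --flavor a100-large \
  --image vllm/vllm-openai \
  https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
  username/input-dataset \
  username/output-dataset \
  --model-id meta-llama/Llama-3.1-8B-Instruct
```
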
  ### Batch Sizes
+ - **Classification**: Start with 10,000 locally, up to 100,000 on HF Jobs
+ - **Generation**: vLLM handles batching automatically - no manual configuration needed
+
+ ### Multi-GPU Tensor Parallelism
+ - Auto-detects available GPUs by default
+ - Use `--tensor-parallel-size` to manually specify
+ - Required for models larger than single GPU memory (e.g., 30B+ models)
+
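A minimal sketch of the manual option, reusing the multi-GPU example above; `--tensor-parallel-size 4` is an assumption chosen to match the four GPUs of the `l4x4` flavor:

```bash
hf jobs uv run \
  --flavor l4x4 \
  --image vllm/vllm-openai \
  https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
  username/input-dataset \
  username/output-dataset \
  --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
  --tensor-parallel-size 4
```
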
+ ### Handling Long Contexts
+ The generate-responses.py script includes smart prompt filtering:
+ - **Default behavior**: Skips prompts exceeding max_model_len
+ - **Use `--max-model-len`**: Limit context to reduce memory usage
+ - **Use `--no-skip-long-prompts`**: Fail on long prompts instead of skipping
+ - Skipped prompts receive empty responses and are logged
 
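A minimal sketch combining those flags (dataset names are placeholders): cap the context to save memory and fail instead of silently skipping over-length prompts.

```bash
# Cap the context length and error out on prompts that exceed it
uv run generate-responses.py \
  username/input-dataset \
  username/output-dataset \
  --max-model-len 8192 \
  --no-skip-long-prompts
```
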
  ## 📚 About vLLM
 
  - **Direct execution**: Run from local files or URLs
 
  ### Dependencies
+ Scripts use UV's inline metadata for automatic dependency management:
  ```python
  # /// script
  # requires-python = ">=3.10"
+ # dependencies = [
+ #     "datasets",
+ #     "flashinfer-python",
+ #     "huggingface-hub[hf_transfer]",
+ #     "torch",
+ #     "transformers",
+ #     "vllm",
+ # ]
  # ///
  ```
 
+ For bleeding-edge features, use the `UV_PRERELEASE=if-necessary` environment variable to allow pre-release versions when needed.
+
  ### Docker Image
+ For HF Jobs, we recommend the official vLLM Docker image: `vllm/vllm-openai`
 
  This image includes:
  - Pre-installed CUDA libraries
  - UV package manager
  - Optimized for GPU inference
 
+ ### Environment Variables
+ - `HF_TOKEN`: Your Hugging Face authentication token (auto-detected if logged in)
+ - `UV_PRERELEASE=if-necessary`: Allow pre-release packages when required
+ - `HF_HUB_ENABLE_HF_TRANSFER=1`: Automatically enabled for faster downloads
+
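A sketch of how these variables are typically supplied, locally via the shell and on HF Jobs via the `-e` flags used in the multi-GPU example above (the token value is a placeholder):

```bash
# Local run: export before invoking the script
export HF_TOKEN=hf_***
export UV_PRERELEASE=if-necessary
uv run generate-responses.py username/input-dataset username/output-dataset

# HF Jobs: pass the same variables with -e
hf jobs uv run \
  --flavor l4x1 \
  --image vllm/vllm-openai \
  -e UV_PRERELEASE=if-necessary \
  -e HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
  username/input-dataset \
  username/output-dataset
```
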
  ## 📝 Contributing
 
  Have a vLLM script to share? We welcome contributions that: