user | created_at | body | issue_number | __index_level_0__
---|---|---|---|---|
qgallouedec | 2025-03-22T18:22:40 | ## Benchmark truncate
```python
import timeit
import numpy as np
from datasets import Dataset
from trl.data_utils import truncate_dataset
def truncate_examples(example, max_length):
    return {key: example[key][:max_length] for key in ["input_ids", "attention_mask"]}
# Create a larger dataset with sequence lengths following a gamma distribution
num_samples = 10_000
# Generate sequence lengths following a gamma distribution
seq_lengths = np.random.gamma(shape=5, scale=20, size=num_samples) # mean will be 100
seq_lengths = np.clip(seq_lengths, 10, None).astype(int) # Clip to [10, inf)
# Generate input sequences with random lengths based on gamma distribution
examples = {
"input_ids": [list(range(length)) for length in seq_lengths],
"attention_mask": [[1] * length for length in seq_lengths],
}
dataset = Dataset.from_dict(examples)
max_length = 128 # Set a fixed truncation length
# Benchmark truncate_dataset
time_truncate_dataset = timeit.timeit(lambda: truncate_dataset(dataset, max_length), number=10)
# Benchmark dataset.map with truncate_examples
time_truncate_examples = timeit.timeit(
lambda: dataset.map(truncate_examples, batched=True, fn_kwargs={"max_length": max_length}), number=10
)
print(f"truncate_dataset time: {time_truncate_dataset:.4f} seconds")
print(f"dataset.map(truncate_examples) time: {time_truncate_examples:.4f} seconds")
print(f"Speedup: {time_truncate_examples / time_truncate_dataset:.2f}x")
```
```
truncate_dataset time: 0.0611 seconds
dataset.map(truncate_examples) time: 6.3807 seconds
Speedup: 104.47x
``` | 3,009 | 1,319 |
qgallouedec | 2025-03-05T17:22:33 | Thanks for reporting. I can't reproduce it right now. Can you provide the full code, with a dataset and a model, that allows the issue to be reproduced? Also, try downgrading to vLLM 0.7.2 and pulling the latest commit from trl. Looking forward to knowing whether that solves the issue. | 3,008 | 1,320 |
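For reference, the suggested steps would look roughly like this (the exact commands are an assumption, not taken from the comment above):
```bash
# Pin vLLM to 0.7.2 and install trl from the latest commit on main (assumed commands)
pip install "vllm==0.7.2"
pip install -U "git+https://github.com/huggingface/trl.git"
```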
iamansinha | 2025-03-12T08:24:20 | @qgallouedec Thanks for your reply!
[Line 705 of grpo_trainer.py](https://github.com/huggingface/trl/blob/3f0695a4ca6f27bd1b7d0280c71960e7aff0d298/trl/trainer/grpo_trainer.py#L705):
`device = self.accelerator.device` was giving just `"cuda"`.
So, I was able to patch the error by manually setting `device = 'cuda:0'` before Line 751.
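For context, the difference the patch relies on can be seen directly; this is an illustrative sketch, not the trainer's code:
```python
import torch

# A bare "cuda" device carries no index, while "cuda:0" pins placement to the first GPU.
bare = torch.device("cuda")
pinned = torch.device("cuda:0")
print(bare.index)    # None
print(pinned.index)  # 0
```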
I found that I was facing this problem only on a 2xA100 setup, and not on another machine with 4xA100, so it might be a machine-specific issue if you are unable to reproduce this error. Closing this issue for now. | 3,008 | 1,321 |
luckyyangrun | 2025-03-18T06:27:34 | I face the same issue with 2x4090. | 3,008 | 1,322 |
Vanchrn | 2025-03-22T03:26:38 | same | 3,008 | 1,323 |
HuggingFaceDocBuilderDev | 2025-03-03T18:28:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,003 | 1,324 |
OctoSabercat | 2025-03-03T17:38:34 | @bot /style | 3,002 | 1,325 |
HelloWorldLTY | 2025-03-03T20:04:50 | Hi, did you try the model, and do you have any ideas? Thanks. | 2,999 | 1,326 |
tastelikefeet | 2025-03-14T02:43:39 | Maybe you can try our framework, which is based on trl: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/full_vllm_qwenvl.sh
We support training a 72B model with 4 A100 GPUs:
https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/train_72b_4gpu.sh | 2,999 | 1,327 |
Wangbiao2 | 2025-03-15T10:27:54 | > Maybe you can try our framework, which is based on trl: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/full_vllm_qwenvl.sh We support training a 72B model with 4 A100 GPUs: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/train_72b_4gpu.sh
Thank you! | 2,999 | 1,328 |
pxyWaterMoon | 2025-03-02T15:20:08 | I met the same problem while training Alpaca-7B with GRPO on A100; the trl environment is as follows:
```
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.11
- TRL version: 0.16.0.dev0
- PyTorch version: 2.6.0
- CUDA device(s): NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.49.0
- Accelerate version: 1.3.0
- Accelerate config: not found
- Datasets version: 3.3.0
- HF Hub version: 0.28.1
- bitsandbytes version: not installed
- DeepSpeed version: 0.16.3
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: not installed
- vLLM version: not installed
```
| 2,996 | 1,329 |
zsychina | 2025-03-02T18:11:46 | Another report
```bash
0%| | 2/87543 [00:18<223:50:10, 9.20s/it]../aten/src/ATen/native/cuda/TensorCompare.cu:110: _assert_async_cuda_kernel: block: [0,0,0], thread: [0,0,0] Assertion `probability tensor contains either `inf`, `nan` or element < 0` failed.
Traceback (most recent call last):
File "/home/zhusiyuan/test_trl/example.py", line 27, in <module>
trainer.train()
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 3692, in training_step
inputs = self._prepare_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/trl/trainer/grpo_trainer.py", line 564, in _prepare_inputs
prompt_completion_ids = unwrapped_model.generate(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 2223, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 3200, in _sample
while self._has_unfinished_sequences(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 2401, in _has_unfinished_sequences
elif this_peer_finished:
^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Traceback (most recent call last):
File "/home/zhusiyuan/test_trl/example.py", line 27, in <module>
trainer.train()
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 3692, in training_step
inputs = self._prepare_inputs(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/trl/trainer/grpo_trainer.py", line 564, in _prepare_inputs
prompt_completion_ids = unwrapped_model.generate(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 2223, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 3200, in _sample
while self._has_unfinished_sequences(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 2401, in _has_unfinished_sequences
elif this_peer_finished:
^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
``` | 2,996 | 1,330 |
zsychina | 2025-03-03T06:56:47 | Guys, I think I found the problem.
For distributed training, it has to be
```
accelerate launch grpo_example.py
```
while
```
python -u grpo_example.py
```
is OK for single-GPU training, but may cause the above errors in distributed training.
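For example, the process count can be pinned explicitly; the flag below comes from accelerate's CLI, and the script name just follows the example above:
```bash
# Launch one process per GPU (here: 2 GPUs)
accelerate launch --num_processes 2 grpo_example.py
```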
| 2,996 | 1,331 |
qgallouedec | 2025-03-01T08:41:02 | Indeed. This comes from https://github.com/huggingface/trl/pull/2881.
Our idea on this is that, unless we find that it gives worse results, we should align with classical loss normalization (global).
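For readers following along, the two options look roughly like this (dummy tensors for illustration, not TRL's implementation):
```python
import torch

# Dummy per-token losses and completion mask, shaped (num_sequences, seq_len)
per_token_loss = torch.randn(4, 6)
completion_mask = torch.ones(4, 6)

# Per-sequence normalization (GRPO paper, Eq. 3): mean over tokens in each sequence, then over sequences.
per_sequence = ((per_token_loss * completion_mask).sum(dim=1) / completion_mask.sum(dim=1)).mean()

# Global normalization: mean over all unmasked tokens in the batch at once.
global_mean = (per_token_loss * completion_mask).sum() / completion_mask.sum()
```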
Have you compared the two options? If so, the results would be very useful. | 2,995 | 1,332 |
tchang1997 | 2025-03-03T21:30:49 | On a related note, is there a reason why the per-token loss is globally normalized (L950 of [`grpo_trainer.py`](https://github.com/huggingface/trl/blob/7442d42c21697fd6c0998a75e7478ed4b40490be/trl/trainer/grpo_trainer.py)), but the KL term continues to use per-sequence normalization (L956 of [`grpo_trainer.py`](https://github.com/huggingface/trl/blob/7442d42c21697fd6c0998a75e7478ed4b40490be/trl/trainer/grpo_trainer.py))?
It looks like the [GRPO paper (Eq. 3)](https://arxiv.org/pdf/2402.03300) sequence-normalizes both (expanding the KL divergence term), so I wonder whether these should be consistent (i.e., both global normalization or both sequence normalization, not a mix). | 2,995 | 1,333 |
qgallouedec | 2025-03-03T22:17:16 | Actually L956 is just logging. But you're right that it should use a consistent normalization. Would you like to open a PR to fix this line? | 2,995 | 1,334 |
tchang1997 | 2025-03-03T22:51:04 | Ah, I see that now. Anyway — opened a [PR as discussed](https://github.com/huggingface/trl/pull/3004)! | 2,995 | 1,335 |
HuggingFaceDocBuilderDev | 2025-02-28T18:45:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2993). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,993 | 1,336 |