url (stringlengths 60-63) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 74-77) | comments_url (stringlengths 69-72) | events_url (stringlengths 67-70) | html_url (stringlengths 48-53) | id (int64 813M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-2.48k) | title (stringlengths 3-427) | user (dict) | labels (list) | state (stringclasses, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (null) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (stringclasses, 4 values) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (stringlengths 0-63.2k ⌀) | reactions (dict) | timeline_url (stringlengths 69-72) | performed_via_github_app (null) | state_reason (stringclasses, 3 values) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/accelerate/issues/2475 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2475/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2475/comments | https://api.github.com/repos/huggingface/accelerate/issues/2475/events | https://github.com/huggingface/accelerate/pull/2475 | 2,146,640,762 | PR_kwDOEmVyfs5nhGeB | 2,475 | Fix wrong `is_namedtuple` implementation | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2475). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"pytest is bugged",
"@BenjaminBossan added a test"
] | 2024-02-21T12:30:01 | 2024-02-21T14:20:16 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2475",
"html_url": "https://github.com/huggingface/accelerate/pull/2475",
"diff_url": "https://github.com/huggingface/accelerate/pull/2475.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2475.patch",
"merged_at": null
} | As per title.
The current implementation does not work in case of nested inheritance. Example:
```python
from torch import Tensor
import torch
from typing import NamedTuple, Optional
from collections import namedtuple
class QuantTensorBase(NamedTuple):
    value: Tensor
    scale: Optional[Tensor]
    zero_point: Optional[Tensor]
    bit_width: Optional[Tensor]
    signed_t: Optional[Tensor]
    training_t: Optional[Tensor]


class Second(QuantTensorBase):
    pass


a = QuantTensorBase(torch.tensor(1.), None, None, None, None, None)
b = Second(torch.tensor(1.), None, None, None, None, None)

point = namedtuple('Point', ['x', 'y'])
p = point(11, y=22)


def isnamedtupleinstance(x):
    t = type(x)
    b = t.__bases__
    print("b", b)
    if len(b) != 1 or b[0] != tuple:
        print("here")
        return False
    f = getattr(t, '_fields', None)
    if not isinstance(f, tuple):
        print("there")
        return False
    return all(type(n) == str for n in f)
print("-----")
print(isnamedtupleinstance(p))
print("-----")
print(isnamedtupleinstance(a))
print("-----")
print(isnamedtupleinstance(b))
```
giving
```
-----
b (<class 'tuple'>,)
True
-----
b (<class 'tuple'>,)
True
-----
b (<class '__main__.QuantTensorBase'>,)
here
False
```
Using instead the approach from https://stackoverflow.com/questions/2166818/how-to-check-if-an-object-is-an-instance-of-a-namedtuple/62692640#62692640.
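For reference, a minimal sketch of the check from the linked answer (not necessarily the exact code in this PR); it relies on tuple ancestry plus the `_fields` attribute, so subclasses of namedtuple-derived classes are detected too:
```python
def is_namedtuple(x) -> bool:
    # A namedtuple instance (or an instance of any subclass of one) is a tuple
    # whose type exposes a `_fields` attribute listing its field names as strings.
    t = type(x)
    fields = getattr(t, "_fields", None)
    return isinstance(x, tuple) and isinstance(fields, tuple) and all(isinstance(f, str) for f in fields)
```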
cc @Giuseppe5 @nickfraser | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2475/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2475/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2474 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2474/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2474/comments | https://api.github.com/repos/huggingface/accelerate/issues/2474/events | https://github.com/huggingface/accelerate/issues/2474 | 2,146,566,618 | I_kwDOEmVyfs5_8gHa | 2,474 | how to turn off fp16 auto_cast? | {
"login": "haorannlp",
"id": 52477842,
"node_id": "MDQ6VXNlcjUyNDc3ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/52477842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haorannlp",
"html_url": "https://github.com/haorannlp",
"followers_url": "https://api.github.com/users/haorannlp/followers",
"following_url": "https://api.github.com/users/haorannlp/following{/other_user}",
"gists_url": "https://api.github.com/users/haorannlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haorannlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haorannlp/subscriptions",
"organizations_url": "https://api.github.com/users/haorannlp/orgs",
"repos_url": "https://api.github.com/users/haorannlp/repos",
"events_url": "https://api.github.com/users/haorannlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/haorannlp/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-02-21T11:54:51 | 2024-02-21T11:54:51 | null | NONE | null | null | null | I notice that the DeepSpeed config always sets `auto_cast=True`. This is my config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_multinode_launcher: standard
gradient_clipping: 1.0
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_offload_param_pin_memory: true
zero3_offload_optimizer_pin_memory: true
zero3_init_flag: true
zero3_save_16bit_model: true
zero_stage: 3
max_live_parameters: 1e9
max_reuse_distance: 1e9
round_robin_gradients: true
deepspeed_hostfile: /opt/tiger/hostfile
distributed_type: DEEPSPEED
fsdp_config: {}
main_training_function: main
mixed_precision: fp16
use_cpu: false
```
this is my deepspeed log:
```
[2024-02-21 19:35:40,143] [INFO] [config.py:958:print_user_config] json = {
"train_batch_size": 512,
"train_micro_batch_size_per_gpu": 64,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"nvme_path": null
},
"offload_param": {
"device": "cpu",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": inf,
"fp16": {
"enabled": true,
"auto_cast": true
},
"bf16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
}
``` | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2474/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2473 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2473/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2473/comments | https://api.github.com/repos/huggingface/accelerate/issues/2473/events | https://github.com/huggingface/accelerate/pull/2473 | 2,146,291,158 | PR_kwDOEmVyfs5nf4QX | 2,473 | [FIX] allow `Accelerator` to detect distributed type from the "LOCAL_RANK" env variable for XPU | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2473). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-21T09:50:09 | 2024-02-21T11:40:56 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2473",
"html_url": "https://github.com/huggingface/accelerate/pull/2473",
"diff_url": "https://github.com/huggingface/accelerate/pull/2473.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2473.patch",
"merged_at": null
} | ## What does this PR do?
Inspired by the newly added `test_fsdp.py` test, I found that with the following approach the Accelerator's default distributed type is NO on XPU, while on an NVIDIA GPU it is FSDP:
```bash
$ export ACCELERATE_USE_FSDP="true"
$ export MASTER_ADDR="localhost"
$ export MASTER_PORT="10999"
$ export RANK="0"
$ export LOCAL_RANK="0"
$ export WORLD_SIZE="1"
$ python
>>> from accelerate.accelerator import Accelerator
>>> accelerator = Accelerator()
>>> accelerator.state.distributed_type
<DistributedType.NO: 'NO'>
## on NV GPU
>>> accelerator.state.distributed_type
<DistributedType.FSDP: 'FSDP'>
```
The reason lies at [this](https://github.com/faaany/accelerate/blob/main/src/accelerate/state.py#L254) line: currently, MULTI_XPU/MULTI_CPU is detected via `WORLD_SIZE`, while on NPU or GPU it is detected via `LOCAL_RANK`, as shown [here](https://github.com/faaany/accelerate/blob/main/src/accelerate/state.py#L220).
To be safe, I added an "or" condition to fix this issue instead of refactoring the entire code block. After the fix, the behavior on XPU aligns with that on GPU, and the `test_mixed_precision` unit test in `test_fsdp.py` passes.
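A rough sketch of the kind of condition described above (hypothetical and simplified, not the exact accelerate code), where either environment variable is enough to trigger the multi-XPU path:
```python
import os

# Hypothetical, simplified detection: treat the run as distributed on XPU when
# either WORLD_SIZE or LOCAL_RANK is set, mirroring the existing GPU/NPU branch.
use_multi_xpu = (
    int(os.environ.get("WORLD_SIZE", 1)) > 1
    or int(os.environ.get("LOCAL_RANK", -1)) != -1
)
```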
Pls have a review, thx! @muellerzr
| {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2473/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2472 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2472/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2472/comments | https://api.github.com/repos/huggingface/accelerate/issues/2472/events | https://github.com/huggingface/accelerate/pull/2472 | 2,146,042,957 | PR_kwDOEmVyfs5nfBaX | 2,472 | Update the default behavior of `zero_grad(set_to_none=None)` | {
"login": "yongchanghao",
"id": 20069446,
"node_id": "MDQ6VXNlcjIwMDY5NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/20069446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongchanghao",
"html_url": "https://github.com/yongchanghao",
"followers_url": "https://api.github.com/users/yongchanghao/followers",
"following_url": "https://api.github.com/users/yongchanghao/following{/other_user}",
"gists_url": "https://api.github.com/users/yongchanghao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongchanghao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongchanghao/subscriptions",
"organizations_url": "https://api.github.com/users/yongchanghao/orgs",
"repos_url": "https://api.github.com/users/yongchanghao/repos",
"events_url": "https://api.github.com/users/yongchanghao/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongchanghao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This is a proposed update for issue #2471",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2472). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-21T07:48:53 | 2024-02-21T11:44:10 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2472",
"html_url": "https://github.com/huggingface/accelerate/pull/2472",
"diff_url": "https://github.com/huggingface/accelerate/pull/2472.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2472.patch",
"merged_at": null
} | Now, the behavior of the wrapped optimizer is that the gradient is cleared by default when `set_to_none=None`. This aligns with `torch.optim.Optimizer` and saves memory.
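A minimal sketch of the described default handling (hypothetical code, not the exact diff):
```python
# Hypothetical sketch of the wrapped optimizer's zero_grad after this change:
# an unspecified set_to_none falls back to PyTorch's default of True.
def zero_grad(self, set_to_none=None):
    if set_to_none is None:
        set_to_none = True
    self.optimizer.zero_grad(set_to_none=set_to_none)
```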
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallism: @pacman100
- DeepSpeed: @pacman100
- Command Line Interface: @muellerzr
- Documentation: @muellerzr
- Core parts of the library: @muellerzr @BenjaminBossan
- Maintained examples: @muellerzr or @pacman100
--> | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2472/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2472/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2471 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2471/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2471/comments | https://api.github.com/repos/huggingface/accelerate/issues/2471/events | https://github.com/huggingface/accelerate/issues/2471 | 2,146,034,160 | I_kwDOEmVyfs5_6eHw | 2,471 | The optimizer uses argument `set_to_none=False` by default | {
"login": "yongchanghao",
"id": 20069446,
"node_id": "MDQ6VXNlcjIwMDY5NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/20069446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongchanghao",
"html_url": "https://github.com/yongchanghao",
"followers_url": "https://api.github.com/users/yongchanghao/followers",
"following_url": "https://api.github.com/users/yongchanghao/following{/other_user}",
"gists_url": "https://api.github.com/users/yongchanghao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongchanghao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongchanghao/subscriptions",
"organizations_url": "https://api.github.com/users/yongchanghao/orgs",
"repos_url": "https://api.github.com/users/yongchanghao/repos",
"events_url": "https://api.github.com/users/yongchanghao/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongchanghao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for bringing this up. I agree that there an argument to be made to use the same default as PyTorch. It appears that the switch in PyTorch was made ~1 year ago in [this PR](https://github.com/pytorch/pytorch/pull/92731) but accelerate hasn't updated the default since then.\r\n\r\nThe only minor concern I see is that this could theoretically break backwards compatibility, although I don't know if there is any practical concern. Curious what others think.",
"Thanks for the reference. Let me know whether I can help if the decision is to align with PyTorch."
] | 2024-02-21T07:42:51 | 2024-02-21T12:09:48 | null | NONE | null | null | null | The default behavior of the optimizer wrapper takes `set_to_none=False` as shown here:
https://github.com/huggingface/accelerate/blob/97d2168e5953fe7373a06c69c02c5a00a84d5344/src/accelerate/optimizer.py#L117
This may have two issues:
1. It contradicts the default behavior of PyTorch
2. It increases the memory footprint, since every forward-backward pass results in gradient accumulation
Is there a particular consideration behind this design?
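For illustration, a small standalone PyTorch snippet (not accelerate-specific) showing the difference between the two settings:
```python
import torch

p = torch.nn.Parameter(torch.randn(4))
opt = torch.optim.SGD([p], lr=0.1)

p.sum().backward()
opt.zero_grad(set_to_none=False)  # keeps a zero-filled gradient tensor allocated
print(p.grad)                     # tensor([0., 0., 0., 0.])

p.sum().backward()
opt.zero_grad(set_to_none=True)   # frees the gradient tensor entirely
print(p.grad)                     # None
```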
| {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2471/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2470 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2470/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2470/comments | https://api.github.com/repos/huggingface/accelerate/issues/2470/events | https://github.com/huggingface/accelerate/issues/2470 | 2,145,978,944 | I_kwDOEmVyfs5_6QpA | 2,470 | memory bug in using accelerate with deepspeed to train diffusion models | {
"login": "zhangvia",
"id": 38352569,
"node_id": "MDQ6VXNlcjM4MzUyNTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/38352569?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangvia",
"html_url": "https://github.com/zhangvia",
"followers_url": "https://api.github.com/users/zhangvia/followers",
"following_url": "https://api.github.com/users/zhangvia/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangvia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangvia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangvia/subscriptions",
"organizations_url": "https://api.github.com/users/zhangvia/orgs",
"repos_url": "https://api.github.com/users/zhangvia/repos",
"events_url": "https://api.github.com/users/zhangvia/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangvia/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-02-21T07:03:18 | 2024-02-21T07:03:18 | null | NONE | null | null | null | ### System Info
```Shell
accelerate: 0.22.0
python:3.8.18
config yaml:
compute_environment: LOCAL_MACHINE
debug: true
deepspeed_config:
gradient_accumulation_steps: 1
offload_optimizer_device: none
offload_param_device: none
zero3_init_flag: False
zero3_save_16bit_model: False
overlap_comm: True
zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 6
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
I use the example training code in the [diffusers repo](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) to finetune Stable Diffusion.
My train command is:
```python
accelerate launch --config_file ./deepspeed.yaml --mixed_precision="fp16" train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--resolution=256 --center_crop --random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--enable_xformers_memory_efficient_attention \
--output_dir="sd-pokemon-model"
```
When I use DeepSpeed stage 2 to train the model, it costs about 7GB of VRAM per GPU. However, the process costs 9GB of VRAM per GPU when using stage 3. Is that a bug in accelerate or DeepSpeed? Theoretically, stage 3 should not cost more VRAM than stage 2.
### Expected behavior
How can I use stage 3 to reduce memory consumption? | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2470/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2469 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2469/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2469/comments | https://api.github.com/repos/huggingface/accelerate/issues/2469/events | https://github.com/huggingface/accelerate/issues/2469 | 2,145,743,944 | I_kwDOEmVyfs5_5XRI | 2,469 | "bfloat16.enabled" needed be specified when training T5 | {
"login": "HCHCXY",
"id": 64967515,
"node_id": "MDQ6VXNlcjY0OTY3NTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/64967515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HCHCXY",
"html_url": "https://github.com/HCHCXY",
"followers_url": "https://api.github.com/users/HCHCXY/followers",
"following_url": "https://api.github.com/users/HCHCXY/following{/other_user}",
"gists_url": "https://api.github.com/users/HCHCXY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HCHCXY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HCHCXY/subscriptions",
"organizations_url": "https://api.github.com/users/HCHCXY/orgs",
"repos_url": "https://api.github.com/users/HCHCXY/repos",
"events_url": "https://api.github.com/users/HCHCXY/events{/privacy}",
"received_events_url": "https://api.github.com/users/HCHCXY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-02-21T03:37:00 | 2024-02-21T11:28:05 | null | NONE | null | null | null | I met following situation when training T5. "ValueError: bfloat16.enabled not found in kwargs. Please specify bfloat16.enabled without auto(set to correct value) in the DeepSpeed config file or pass it in kwargs."
I use transformers==4.28.0 and accelerate==0.20.3.
The variable `trainer` has type `transformers.trainer_seq2seq.Seq2SeqTrainer`. I don't see how to pass configurations for bfloat16 to the `trainer.train` method. Could anyone help?
The information is listed below:
```
File "./ds_train.py", line 378, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/hechenghua/anaconda3/envs/swiftsage/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/home/hechenghua/anaconda3/envs/swiftsage/lib/python3.8/site-packages/transformers/trainer.py", line 1659, in _inner_training_loop
model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(
File "/home/hechenghua/anaconda3/envs/swiftsage/lib/python3.8/site-packages/accelerate/accelerator.py", line 1178, in prepare
result = self._prepare_deepspeed(*args)
File "/home/hechenghua/anaconda3/envs/swiftsage/lib/python3.8/site-packages/accelerate/accelerator.py", line 1486, in _prepare_deepspeed
deepspeed_plugin.deepspeed_config_process(must_match=False, **config_kwargs)
File "/home/hechenghua/anaconda3/envs/swiftsage/lib/python3.8/site-packages/accelerate/utils/dataclasses.py", line 624, in deepspeed_config_process
self.deepspeed_config_process(
File "/home/hechenghua/anaconda3/envs/swiftsage/lib/python3.8/site-packages/accelerate/utils/dataclasses.py", line 628, in deepspeed_config_process
self.fill_match(prefix + key, mismatches, must_match=must_match, **kwargs)
File "/home/hechenghua/anaconda3/envs/swiftsage/lib/python3.8/site-packages/accelerate/utils/dataclasses.py", line 603, in fill_match
raise ValueError(
ValueError: bfloat16.enabled not found in kwargs. Please specify bfloat16.enabled without auto(set to correct value) in the DeepSpeed config file or pass it in kwargs.
```
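If the DeepSpeed JSON config in use has something like `"bfloat16": {"enabled": "auto"}`, one direction suggested by the error message itself (a guess based only on the message above, not a verified fix) is to replace the `auto` value with an explicit boolean, e.g.:
```json
{
  "bfloat16": {
    "enabled": false
  }
}
```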
| {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2469/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2468 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2468/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2468/comments | https://api.github.com/repos/huggingface/accelerate/issues/2468/events | https://github.com/huggingface/accelerate/issues/2468 | 2,145,672,792 | I_kwDOEmVyfs5_5F5Y | 2,468 | main_process_ip not working | {
"login": "asdfry",
"id": 39879672,
"node_id": "MDQ6VXNlcjM5ODc5Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39879672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asdfry",
"html_url": "https://github.com/asdfry",
"followers_url": "https://api.github.com/users/asdfry/followers",
"following_url": "https://api.github.com/users/asdfry/following{/other_user}",
"gists_url": "https://api.github.com/users/asdfry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asdfry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asdfry/subscriptions",
"organizations_url": "https://api.github.com/users/asdfry/orgs",
"repos_url": "https://api.github.com/users/asdfry/repos",
"events_url": "https://api.github.com/users/asdfry/events{/privacy}",
"received_events_url": "https://api.github.com/users/asdfry/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-02-21T02:42:00 | 2024-02-21T02:42:49 | null | NONE | null | null | null | ### System Info
```Shell
- `Accelerate` version: 0.25.0
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.2.0+cu121 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 110.32 GB
- GPU type: NVIDIA L4
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 2
- main_process_ip: 192.168.10.161
- main_process_port: 1040
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'deepspeed_hostfile': '/root/hostfile', 'deepspeed_multinode_launcher': 'pdsh', 'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'none', 'offload_param_device': 'cpu', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, I'm trying multi-node training with accelerate+deepspeed.
During the broadcast before the start of the training, an error occurred regarding the network interface.
I set 192.168.10.161 as `main_process_ip` in the accelerate config, but runner.py targets 192.168.121.122, which is the master node's first network interface and cannot communicate with the other nodes.
Is there a way to set the master_addr used by runner.py in my environment?
I would greatly appreciate your assistance.
![issue](https://github.com/huggingface/accelerate/assets/39879672/409237d6-bfba-4f5d-95b2-5628d7589244)
### Expected behavior
runner.py targets 192.168.10.161 | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2468/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2467 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2467/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2467/comments | https://api.github.com/repos/huggingface/accelerate/issues/2467/events | https://github.com/huggingface/accelerate/pull/2467 | 2,145,425,767 | PR_kwDOEmVyfs5nc5j4 | 2,467 | Fix TPU with new `XLA` device type | {
"login": "will-cromar",
"id": 15197278,
"node_id": "MDQ6VXNlcjE1MTk3Mjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15197278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/will-cromar",
"html_url": "https://github.com/will-cromar",
"followers_url": "https://api.github.com/users/will-cromar/followers",
"following_url": "https://api.github.com/users/will-cromar/following{/other_user}",
"gists_url": "https://api.github.com/users/will-cromar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/will-cromar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/will-cromar/subscriptions",
"organizations_url": "https://api.github.com/users/will-cromar/orgs",
"repos_url": "https://api.github.com/users/will-cromar/repos",
"events_url": "https://api.github.com/users/will-cromar/events{/privacy}",
"received_events_url": "https://api.github.com/users/will-cromar/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2467). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> * Don't check the `xm.xla_device` if we don't need to know the device type in `is_torch_xla_available`. Calling `xm.xla_device` before `xmp.spawn` causes issues. This causes `torch_xla` to initialize the runtime parent process, reserving some space on GPU that can't be used by the child processes and causing TPU workloads to outright crash (message below). (Can we just check `torch_xla.runtime.device_type()` instead? @anw90)\r\n\r\nSorry for the code that breaks the task on TPU. In one of my earliest versions, I checked the device_type using the `PJRT_DEVICE` value in `is_torch_xla_available`. Later, I changed it to the current implementation to decouple it from outside environments. I think it's okay to use `torch_xla.runtime.device_type()` to check the device type if there is a crash on TPU for the current implementation. \r\n"
] | 2024-02-20T22:51:14 | 2024-02-21T08:13:09 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2467",
"html_url": "https://github.com/huggingface/accelerate/pull/2467",
"diff_url": "https://github.com/huggingface/accelerate/pull/2467.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2467.patch",
"merged_at": null
} | # What does this PR do?
#2176 replaces the `TPU` device type with `XLA`, letting us use GPUs with `accelerate` now :confetti_ball:
This PR fixes some issues that pop up on TPU after that PR:
- Don't check the `xm.xla_device` if we don't need to know the device type in `is_torch_xla_available`. Calling `xm.xla_device` before `xmp.spawn` causes issues. This causes `torch_xla` to initialize the runtime parent process, reserving some space on GPU that can't be used by the child processes and causing TPU workloads to outright crash (message below). (Can we just check `torch_xla.runtime.device_type()` instead? @anw90)
- Fix menu of options in `accelerate config` to offer `XLA` as an option. Selecting `TPU` causes an error because that device type no longer exists.
- Allow bf16 mixed precision on TPU. Matches old behavior before #2176.
Currently, running `accelerate` on TPU causes this crash due to the first issue:
```
...
F0000 00:00:1708382221.197251 23274 pjrt_registry.cc:117] Non-OK-status: pjrt::LoadPjrtPlugin("tpu", tpu_library_path).status() status: ALREADY_EXISTS: PJRT_Api already exists for device type tpu
...
```
Tested `accelerate test` on TPU v4-8.
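A rough sketch of the lazier device-type check described in the first bullet (hypothetical code; it assumes `torch_xla.runtime.device_type()` as discussed in the comments, and is not the exact implementation):
```python
# Hypothetical sketch: only query the runtime's device type when the caller
# actually asks for it, instead of calling xm.xla_device() eagerly.
def is_torch_xla_available(check_is_tpu=False, check_is_gpu=False):
    try:
        import torch_xla.runtime as xr
    except ImportError:
        return False
    if check_is_tpu or check_is_gpu:
        device_type = xr.device_type()  # e.g. "TPU" or "CUDA"
        return device_type == ("TPU" if check_is_tpu else "CUDA")
    return True
```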
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @muellerzr @anw90 @vanbasten23 | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2467/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2466 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2466/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2466/comments | https://api.github.com/repos/huggingface/accelerate/issues/2466/events | https://github.com/huggingface/accelerate/pull/2466 | 2,145,287,779 | PR_kwDOEmVyfs5ncbEv | 2,466 | [docs] Divide training and inference | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2466). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-20T21:11:05 | 2024-02-20T22:22:22 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2466",
"html_url": "https://github.com/huggingface/accelerate/pull/2466",
"diff_url": "https://github.com/huggingface/accelerate/pull/2466.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2466.patch",
"merged_at": null
} | ⏳ WIP ⏳
Separate training and inference docs for easier navigation. | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2466/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2466/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2465 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2465/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2465/comments | https://api.github.com/repos/huggingface/accelerate/issues/2465/events | https://github.com/huggingface/accelerate/pull/2465 | 2,145,140,466 | PR_kwDOEmVyfs5nb6-w | 2,465 | [docs] Accelerator API | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2465). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-20T19:43:23 | 2024-02-20T20:22:36 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2465",
"html_url": "https://github.com/huggingface/accelerate/pull/2465",
"diff_url": "https://github.com/huggingface/accelerate/pull/2465.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2465.patch",
"merged_at": null
} | Related to #2456. This PR:
➖ removes, from the `Accelerator` API page, redundant content that is already covered in the Quicktour or tutorials, making it easier to find a specific method or function in the API.
➕ adds docstrings for `GradientAccumulationPlugin` because they're currently hidden in the metadata `help` field. | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2465/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2464 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2464/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2464/comments | https://api.github.com/repos/huggingface/accelerate/issues/2464/events | https://github.com/huggingface/accelerate/issues/2464 | 2,145,098,889 | I_kwDOEmVyfs5_25yJ | 2,464 | DeepSpeed tests fail with PyTest 8.0.1 | {
"login": "loadams",
"id": 114770087,
"node_id": "U_kgDOBtdApw",
"avatar_url": "https://avatars.githubusercontent.com/u/114770087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/loadams",
"html_url": "https://github.com/loadams",
"followers_url": "https://api.github.com/users/loadams/followers",
"following_url": "https://api.github.com/users/loadams/following{/other_user}",
"gists_url": "https://api.github.com/users/loadams/gists{/gist_id}",
"starred_url": "https://api.github.com/users/loadams/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loadams/subscriptions",
"organizations_url": "https://api.github.com/users/loadams/orgs",
"repos_url": "https://api.github.com/users/loadams/repos",
"events_url": "https://api.github.com/users/loadams/events{/privacy}",
"received_events_url": "https://api.github.com/users/loadams/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is a transformers issue specifically as seen in the trace. They may have fixed this on main but would recommend asking there instead. ",
"Yes, you're correct, thanks @muellerzr. Seems it isn't fixed in main so I'll open an issue there. Thanks!"
] | 2024-02-20T19:16:02 | 2024-02-20T20:46:17 | 2024-02-20T20:46:16 | NONE | null | null | null | ### System Info
```Shell
In DeepSpeed we the accelerate tests are failing when updating to PyTest 8.0.1 with the following error:
______________ ERROR collecting tests/deepspeed/test_deepspeed.py ______________
ImportError while importing test module '/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/accelerate/tests/deepspeed/test_deepspeed.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../unit-test-venv/lib/python3.8/site-packages/_pytest/python.py:538: in importtestmodule
mod = import_path(path, mode=importmode, root=config.rootpath)
../unit-test-venv/lib/python3.8/site-packages/_pytest/pathlib.py:566: in import_path
importlib.import_module(module_name)
/opt/conda/envs/ptca/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:671: in _load_unlocked
???
../unit-test-venv/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:178: in exec_module
exec(co, module.__dict__)
tests/deepspeed/test_deepspeed.py:26: in <module>
from transformers.testing_utils import mockenv_context
../unit-test-venv/lib/python3.8/site-packages/transformers/testing_utils.py:129: in <module>
from _pytest.doctest import (
E ImportError: cannot import name 'import_path' from '_pytest.doctest' (/tmp/actions-runner/_work/DeepSpeed/DeepSpeed/unit-test-venv/lib/python3.8/site-packages/_pytest/doctest.py)
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 4.71s ===============================
```
Sample build [here](https://github.com/microsoft/DeepSpeed/actions/runs/7977730884/job/21781270161?pr=5164#step:7:391).
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
Run accelerate tests in DeepSpeed repository, outlined [here](https://github.com/microsoft/DeepSpeed/blob/master/.github/workflows/nv-accelerate-v100.yml#L45)
### Expected behavior
DeepSpeed tests should pass. | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2464/timeline | null | completed | false |
https://api.github.com/repos/huggingface/accelerate/issues/2463 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2463/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2463/comments | https://api.github.com/repos/huggingface/accelerate/issues/2463/events | https://github.com/huggingface/accelerate/issues/2463 | 2,144,361,922 | I_kwDOEmVyfs5_0F3C | 2,463 | How to initialize Accelerator twice but with different setup within the same code ? | {
"login": "soneyahossain",
"id": 54991949,
"node_id": "MDQ6VXNlcjU0OTkxOTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/54991949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soneyahossain",
"html_url": "https://github.com/soneyahossain",
"followers_url": "https://api.github.com/users/soneyahossain/followers",
"following_url": "https://api.github.com/users/soneyahossain/following{/other_user}",
"gists_url": "https://api.github.com/users/soneyahossain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soneyahossain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soneyahossain/subscriptions",
"organizations_url": "https://api.github.com/users/soneyahossain/orgs",
"repos_url": "https://api.github.com/users/soneyahossain/repos",
"events_url": "https://api.github.com/users/soneyahossain/events{/privacy}",
"received_events_url": "https://api.github.com/users/soneyahossain/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"It is generally not recommended to initialize `Accelerator` twice in the same process. You could try if calling `AcceleratorState._reset_state()` works for you but beware that this is outside of how accelerate should be used normally."
] | 2024-02-20T13:17:26 | 2024-02-21T11:34:58 | null | NONE | null | null | null | ### System Info
```Shell
Hello, I want to initialize Accelerate once for training and a second time for inference.
It looks like this does not work, and the error message is not clear. Is there a way to reset the previously initialized Accelerator and then initialize it again with an inference setup?
For training I am doing :
accelerator = Accelerator(kwargs_handlers=[process_group_kwargs])
model,test_loader, valid_loader, optimizer, scheduler = accelerator.prepare(
model, test_loader, valid_loader, optimizer, scheduler)
For inference I want to do: accelerator = Accelerator()
model, valid_loader, optimizer = eval_accelerator.prepare(model, valid_loader, optimizer)
For inference, I do not want to use an optimizer, but I get an error because I am using zero_stage: 1, so I passed the optimizer I used during training. But then I got a batch-size error for the validation set, so I prepared the validation loader one more time after initializing the Accelerator. Still, during inference I get an error during preparation.
Any idea how to fix this?
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
1. Initialize Accelerator for training
2. Once the training is done, initialize again for the inference.
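A rough sketch of the reset-based workaround suggested in the comments above; `AcceleratorState._reset_state()` is internal rather than a public API, and the model/dataloader names below are taken from the snippet in the system info, so treat this as a best-effort illustration:
```python
from accelerate import Accelerator
from accelerate.state import AcceleratorState

# Training phase (process_group_kwargs, model, loaders, etc. as defined above)
accelerator = Accelerator(kwargs_handlers=[process_group_kwargs])
model, test_loader, valid_loader, optimizer, scheduler = accelerator.prepare(
    model, test_loader, valid_loader, optimizer, scheduler
)
# ... train ...

# Tear down the previous state before re-initializing for inference
AcceleratorState._reset_state()
eval_accelerator = Accelerator()
model, valid_loader = eval_accelerator.prepare(model, valid_loader)
```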
### Expected behavior
I just want to prepare the Accelerator for the inference task once the training is done. | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2463/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2462 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2462/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2462/comments | https://api.github.com/repos/huggingface/accelerate/issues/2462/events | https://github.com/huggingface/accelerate/issues/2462 | 2,143,978,139 | I_kwDOEmVyfs5_yoKb | 2,462 | CUDA out of memory - Knowledge Distillation | {
"login": "ninagroot",
"id": 66623281,
"node_id": "MDQ6VXNlcjY2NjIzMjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/66623281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ninagroot",
"html_url": "https://github.com/ninagroot",
"followers_url": "https://api.github.com/users/ninagroot/followers",
"following_url": "https://api.github.com/users/ninagroot/following{/other_user}",
"gists_url": "https://api.github.com/users/ninagroot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ninagroot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ninagroot/subscriptions",
"organizations_url": "https://api.github.com/users/ninagroot/orgs",
"repos_url": "https://api.github.com/users/ninagroot/repos",
"events_url": "https://api.github.com/users/ninagroot/events{/privacy}",
"received_events_url": "https://api.github.com/users/ninagroot/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"It looks like the `torch.cat` of the two tensors tries to allocate quite a bit of memory. Is this plausible, could you check the size of these two tensors?\r\n\r\nAlso, just in case you're not aware, but since you're using DDP, you need a copy of the model on each GPU, i.e. the fact that you have 2 GPUs does not really reduce the amount of memory required for training. As a test, if you run this only on one GPU, do you get the same OOM error?"
] | 2024-02-20T09:52:16 | 2024-02-20T11:34:53 | null | NONE | null | null | null | ### System Info
```Shell
- `Accelerate` version: 0.26.1
- Platform: Linux-4.18.0-372.57.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.18
- Numpy version: 1.26.1
- PyTorch version (GPU?): 2.1.2+cu121 (False)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 251.38 GB
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
The code is available in the Google Drive folder below:
https://drive.google.com/drive/folders/18zpr_RuDY59Bu94M31z4492GYUlwoS48?usp=share_link
I run the new_ddp.py file using the jobscript_new_ddp job script, with the command: sbatch jobscript_new_ddp
I keep getting the CUDA error, even though I run the code using a very small dataset and two GPUs.
### Expected behavior
```
I get the following error:
File "/gpfs/home2/ngroot/new_ddp.py", line 174, in <module>
trainer.train()
File "/home/ngroot/anaconda3/envs/llmke/lib/python3.9/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/home/ngroot/anaconda3/envs/llmke/lib/python3.9/site-packages/transformers/trainer.py", line 1944, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ngroot/anaconda3/envs/llmke/lib/python3.9/site-packages/transformers/trainer.py", line 2291, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/home/ngroot/anaconda3/envs/llmke/lib/python3.9/site-packages/transformers/trainer.py", line 3095, in evaluate
output = eval_loop(
File "/home/ngroot/anaconda3/envs/llmke/lib/python3.9/site-packages/transformers/trainer.py", line 3310, in evaluation_loop
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, padding_index=-100)
File "/home/ngroot/anaconda3/envs/llmke/lib/python3.9/site-packages/transformers/trainer_pt_utils.py", line 123, in nested_concat
return torch_pad_and_concatenate(tensors, new_tensors, padding_index=padding_index)
File "/home/ngroot/anaconda3/envs/llmke/lib/python3.9/site-packages/transformers/trainer_pt_utils.py", line 82, in torch_pad_and_concatenate
return torch.cat((tensor1, tensor2), dim=0)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.57 GiB. GPU 0 has a total capacty of 39.39 GiB of which 5.26 GiB is free. Including non-PyTorch memory, this process has 34.12 GiB memory in use. Of the allocated memory 31.64 GiB is allocated by PyTorch, and 1.73 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
| {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2462/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2461 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2461/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2461/comments | https://api.github.com/repos/huggingface/accelerate/issues/2461/events | https://github.com/huggingface/accelerate/pull/2461 | 2,142,329,622 | PR_kwDOEmVyfs5nSUbC | 2,461 | Fix the pytest version to be less than 8.0.0 | {
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2461). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Not sure whom to ping, as Zach is off this week, maybe @ydshieh with reference to https://github.com/huggingface/transformers/pull/28758."
] | 2024-02-19T13:04:16 | 2024-02-21T14:29:02 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2461",
"html_url": "https://github.com/huggingface/accelerate/pull/2461",
"diff_url": "https://github.com/huggingface/accelerate/pull/2461.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2461.patch",
"merged_at": null
} | # What does this PR do?
Fixes the pytest version.
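For illustration, the change is of this general shape: capping pytest below 8.0.0 in the test requirements. The file name, variable name, and neighbouring entries below are placeholders, not the actual diff:
```python
# setup.py (sketch, not the actual diff)
extras = {}
extras["testing"] = [
    "pytest<8.0.0",    # the upper bound is the point of this PR
    "pytest-xdist",    # placeholder neighbours; the real extras list may differ
    "pytest-subtests",
]
```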
We're getting errors such as this one:
https://github.com/huggingface/accelerate/actions/runs/7958684877/job/21725397566?pr=2450 | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2461/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2460 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2460/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2460/comments | https://api.github.com/repos/huggingface/accelerate/issues/2460/events | https://github.com/huggingface/accelerate/pull/2460 | 2,140,799,061 | PR_kwDOEmVyfs5nNLFa | 2,460 | Support deepspeed dynamo | {
"login": "oraluben",
"id": 5031346,
"node_id": "MDQ6VXNlcjUwMzEzNDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5031346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oraluben",
"html_url": "https://github.com/oraluben",
"followers_url": "https://api.github.com/users/oraluben/followers",
"following_url": "https://api.github.com/users/oraluben/following{/other_user}",
"gists_url": "https://api.github.com/users/oraluben/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oraluben/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oraluben/subscriptions",
"organizations_url": "https://api.github.com/users/oraluben/orgs",
"repos_url": "https://api.github.com/users/oraluben/repos",
"events_url": "https://api.github.com/users/oraluben/events{/privacy}",
"received_events_url": "https://api.github.com/users/oraluben/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-02-18T07:44:43 | 2024-02-18T09:46:23 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2460",
"html_url": "https://github.com/huggingface/accelerate/pull/2460",
"diff_url": "https://github.com/huggingface/accelerate/pull/2460.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2460.patch",
"merged_at": null
} | # What does this PR do?
This is a PR that tries to respect https://github.com/microsoft/DeepSpeed/pull/4878 in 🤗 accelerate/transformers.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100 since it's deepspeed related, and @tohtana since you implemented the deepspeed part.
| {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2460/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2459 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2459/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2459/comments | https://api.github.com/repos/huggingface/accelerate/issues/2459/events | https://github.com/huggingface/accelerate/issues/2459 | 2,140,728,187 | I_kwDOEmVyfs5_mOt7 | 2,459 | Accelerate not working when setting subset of GPUs as visible CUDA devices | {
"login": "MrRobot2211",
"id": 23513768,
"node_id": "MDQ6VXNlcjIzNTEzNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/23513768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrRobot2211",
"html_url": "https://github.com/MrRobot2211",
"followers_url": "https://api.github.com/users/MrRobot2211/followers",
"following_url": "https://api.github.com/users/MrRobot2211/following{/other_user}",
"gists_url": "https://api.github.com/users/MrRobot2211/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrRobot2211/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrRobot2211/subscriptions",
"organizations_url": "https://api.github.com/users/MrRobot2211/orgs",
"repos_url": "https://api.github.com/users/MrRobot2211/repos",
"events_url": "https://api.github.com/users/MrRobot2211/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrRobot2211/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-02-18T04:18:05 | 2024-02-18T04:18:53 | null | NONE | null | null | null | ### System Info
```Shell
/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torchvision/image.so: undefined symbol: _ZN3c1017RegisterOperatorsD1Ev'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
Copy-and-paste the text below in your GitHub issue
- `Accelerate` version: 0.27.0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- Numpy version: 1.26.4
- PyTorch version (GPU?): 2.2.0 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 125.63 GB
- GPU type: NVIDIA GeForce RTX 4090
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: False
- main_training_function: main
- downcast_bf16: False
- tpu_use_cluster: False
- tpu_use_sudo: False
I have 1 3090 and 2 4090 GPUs
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
I have this loop
```python
def train_ddp_accelerate(CFG, fold_id, train, output_path):
accelerator = Accelerator(split_batches=True,mixed_precision='fp16')
# accelerator = Accelerator(mixed_precision='fp16')
set_seed(CFG.seed)
device = accelerator.device #'cuda'#torch.device(CFG.device)
train_path_label, val_path_label, _, _ = get_path_label(fold_id, train_all)
train_transform, val_transform = get_transforms(CFG)
train_dataset = HMSHBACSpecDataset(**train_path_label, transform=train_transform)
val_dataset = HMSHBACSpecDataset(**val_path_label, transform=val_transform)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=CFG.batch_size,pin_memory=True, num_workers=4, shuffle=True, drop_last=True)
val_loader = torch.utils.data.DataLoader(
val_dataset, batch_size=CFG.batch_size,pin_memory=True, num_workers=4, shuffle=False, drop_last=False)
model = HMSHBACSpecModel(
model_name=CFG.model_name, pretrained=True, num_classes=6, in_channels=1)
# model = torch.nn.parallel.DataParallel(model, device_ids=[0, 1, 2])
optimizer = optim.AdamW(params=model.parameters(), lr=CFG.lr, weight_decay=CFG.weight_decay)
scheduler = lr_scheduler.OneCycleLR(
optimizer=optimizer, epochs=CFG.max_epoch,
pct_start=0.0, steps_per_epoch=len(train_loader),
max_lr=CFG.lr, div_factor=25, final_div_factor=4.0e-01
)
loss_func = KLDivLossWithLogits()
loss_func.to(device)
# loss_func = torch.nn.parallel.DataParallel(loss_func, device_ids=[0, 1, 2])
loss_func_val = KLDivLossWithLogits()
loss_func_val.to(device)
# loss_func_val = torch.nn.parallel.DataParallel(loss_func_val, device_ids=[0, 1, 2])
# Send everything through `accelerator.prepare`
train_loader, val_loader, model, optimizer,scheduler = accelerator.prepare(
train_loader, val_loader, model, optimizer,scheduler
)
best_val_loss = 1.0e+09
best_epoch = 0
train_loss = 0
# Train for a single epoch
for epoch in range(1, CFG.max_epoch + 1):
epoch_start = time()
model.train()
for batch in train_loader:
#batch = to_device(batch, device)
x, t = batch["data"], batch["target"]
optimizer.zero_grad()
with accelerator.autocast():
y = model(x)
loss = loss_func(y, t)
accelerator.backward(loss)
optimizer.step()
if not accelerator.optimizer_step_was_skipped:
scheduler.step()
train_loss += loss.detach()
train_loss /= len(train_loader)
# Evaluate
model.eval()
correct = 0
val_loss=0
with torch.no_grad():
for batch in val_loader:
x, t = batch["data"], batch["target"]
# x = to_device(x, device)
                y = model(x)  # forward pass on the validation batch
                val_loss += loss_func_val(y, t).detach()
val_loss /= len(val_loader)
accelerator.wait_for_everyone()
total_val_loss = accelerator.reduce(val_loss).cpu()
total_train_loss = accelerator.reduce(train_loss).cpu()
if val_loss < best_val_loss:
best_epoch = epoch
best_val_loss = val_loss
# print("save model")
if accelerator.is_main_process:
accelerator.save_model(model, str(output_path) + f'snapshot_epoch_{epoch}')
#reduced_tensor = accelerator.reduce(process_tensor, reduction="sum")
elapsed_time = time() - epoch_start
accelerator.wait_for_everyone()
if accelerator.is_main_process:
print(
f"[epoch {epoch}] train loss: {total_train_loss: .6f}, val loss: {total_val_loss: .6f}, elapsed_time: {elapsed_time: .3f}")
accelerator.wait_for_everyone()
if epoch - best_epoch > CFG.es_patience:
if accelerator.is_main_process:
print("Early Stopping!")
accelerator.wait_for_everyone()
break
train_loss = 0
#print(f'Accuracy: {100. * correct / len(val_loader.dataset)}')
accelerator.end_training()
    accelerator.clear()
```
When running like this it runs as expected:
```python
import os
os.environ["NCCL_P2P_DISABLE"]="1"
for fold_id in FOLDS[3:]:
output_path = Path(f"fold{fold_id}")
output_path.mkdir(exist_ok=True)
print(f"[fold{fold_id}]")
notebook_launcher(train_ddp_accelerate, args=(CFG, fold_id, train, output_path), num_processes=3,mixed_precision='fp16')
```
But when running like this
```python
import os
os.environ['CUDA_DEVICE_ORDER']="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2"
os.environ["NCCL_P2P_DISABLE"]="1"
for fold_id in FOLDS[3:]:
output_path = Path(f"fold{fold_id}")
output_path.mkdir(exist_ok=True)
print(f"[fold{fold_id}]")
notebook_launcher(train_ddp_accelerate, args=(CFG, fold_id, train, output_path), num_processes=2,mixed_precision='fp16')
```
I get
> ---------------------------------------------------------------------------
ProcessRaisedException Traceback (most recent call last)
File [~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:200](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:200), in notebook_launcher(function, args, num_processes, mixed_precision, use_port, master_addr, node_rank, num_nodes)
[199](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:199) try:
--> [200](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:200) start_processes(launcher, args=args, nprocs=num_processes, start_method="fork")
[201](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:201) except ProcessRaisedException as e:
File [~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:197](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:197), in start_processes(fn, args, nprocs, join, daemon, start_method)
[196](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:196) # Loop on join until it returns True or raises an exception.
--> [197](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:197) while not context.join():
[198](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:198) pass
File [~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:158](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:158), in ProcessContext.join(self, timeout)
[157](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:157) msg += original_trace
--> [158](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py:158) raise ProcessRaisedException(msg, error_index, failed_process.pid)
ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 315, in _lazy_init
queued_call()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 183, in _check_capability
capability = get_device_capability(d)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 439, in get_device_capability
prop = get_device_properties(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 457, in get_device_properties
return _get_device_properties(device) # type: ignore[name-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1704987288773/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=2, num_gpus=
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 68, in _wrap
fn(i, *args)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/utils/launch.py", line 570, in __call__
self.launcher(*args)
File "/tmp/ipykernel_1310472/2664963675.py", line 3, in train_ddp_accelerate
accelerator = Accelerator(mixed_precision='fp16')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/accelerator.py", line 378, in __init__
self.state = AcceleratorState(
^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/state.py", line 771, in __init__
PartialState(cpu, **kwargs)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/state.py", line 236, in __init__
torch.cuda.set_device(self.device)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 408, in set_device
torch._C._cuda_setDevice(device)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 321, in _lazy_init
raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1704987288773/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=2, num_gpus=
CUDA call was originally invoked at:
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel_launcher.py", line 18, in <module>
app.launch_new_instance()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/traitlets/config/application.py", line 1075, in launch_instance
app.start()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelapp.py", line 739, in start
self.io_loop.start()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/tornado/platform/asyncio.py", line 195, in start
self.asyncio_loop.run_forever()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
self._run_once()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
handle._run()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 542, in dispatch_queue
await self.process_one()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 531, in process_one
await dispatch(*args)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 437, in dispatch_shell
await result
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 359, in execute_request
await super().execute_request(stream, ident, parent)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 775, in execute_request
reply_content = await reply_content
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 446, in do_execute
res = shell.run_cell(
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/zmqshell.py", line 549, in run_cell
return super().run_cell(*args, **kwargs)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3051, in run_cell
result = self._run_cell(
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3106, in _run_cell
result = runner(coro)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
coro.send(None)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3311, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3493, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/tmp/ipykernel_1310472/3735111654.py", line 18, in <module>
import torch
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/__init__.py", line 1421, in <module>
_C._initExtension(manager_path())
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 247, in <module>
_lazy_call(_check_capability)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 244, in _lazy_call
_queued_calls.append((callable, traceback.format_stack()))
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[31], [line 6](vscode-notebook-cell:?execution_count=31&line=6)
[4](vscode-notebook-cell:?execution_count=31&line=4) output_path.mkdir(exist_ok=True)
[5](vscode-notebook-cell:?execution_count=31&line=5) print(f"[fold{fold_id}]")
----> [6](vscode-notebook-cell:?execution_count=31&line=6) notebook_launcher(train_ddp_accelerate, args=(CFG, fold_id, train, output_path), num_processes=2,mixed_precision='fp16')
File [~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:210](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:210), in notebook_launcher(function, args, num_processes, mixed_precision, use_port, master_addr, node_rank, num_nodes)
[203](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:203) raise RuntimeError(
[204](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:204) "CUDA has been initialized before the `notebook_launcher` could create a forked subprocess. "
[205](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:205) "This likely stems from an outside import causing issues once the `notebook_launcher()` is called. "
[206](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:206) "Please review your imports and test them when running the `notebook_launcher()` to identify "
[207](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:207) "which one is problematic and causing CUDA to be initialized."
[208](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:208) ) from e
[209](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:209) else:
--> [210](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:210) raise RuntimeError(f"An issue was found when launching the training: {e}") from e
[212](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:212) else:
[213](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:213) # No need for a distributed launch otherwise as it's either CPU, GPU or MPS.
[214](https://file+.vscode-resource.vscode-cdn.net/home/felipe/ssdpny0/hms-harmful-brain-activity-classification/code/~/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/launchers.py:214) if is_mps_available():
RuntimeError: An issue was found when launching the training:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 315, in _lazy_init
queued_call()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 183, in _check_capability
capability = get_device_capability(d)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 439, in get_device_capability
prop = get_device_properties(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 457, in get_device_properties
return _get_device_properties(device) # type: ignore[name-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1704987288773/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=2, num_gpus=
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/multiprocessing/spawn.py", line 68, in _wrap
fn(i, *args)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/utils/launch.py", line 570, in __call__
self.launcher(*args)
File "/tmp/ipykernel_1310472/2664963675.py", line 3, in train_ddp_accelerate
accelerator = Accelerator(mixed_precision='fp16')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/accelerator.py", line 378, in __init__
self.state = AcceleratorState(
^^^^^^^^^^^^^^^^^
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/state.py", line 771, in __init__
PartialState(cpu, **kwargs)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/accelerate/state.py", line 236, in __init__
torch.cuda.set_device(self.device)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 408, in set_device
torch._C._cuda_setDevice(device)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 321, in _lazy_init
raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1704987288773/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. device=2, num_gpus=
CUDA call was originally invoked at:
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel_launcher.py", line 18, in <module>
app.launch_new_instance()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/traitlets/config/application.py", line 1075, in launch_instance
app.start()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelapp.py", line 739, in start
self.io_loop.start()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/tornado/platform/asyncio.py", line 195, in start
self.asyncio_loop.run_forever()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
self._run_once()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
handle._run()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 542, in dispatch_queue
await self.process_one()
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 531, in process_one
await dispatch(*args)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 437, in dispatch_shell
await result
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 359, in execute_request
await super().execute_request(stream, ident, parent)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 775, in execute_request
reply_content = await reply_content
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 446, in do_execute
res = shell.run_cell(
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/ipykernel/zmqshell.py", line 549, in run_cell
return super().run_cell(*args, **kwargs)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3051, in run_cell
result = self._run_cell(
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3106, in _run_cell
result = runner(coro)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
coro.send(None)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3311, in run_cell_async
has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3493, in run_ast_nodes
if await self.run_code(code, result, async_=asy):
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/tmp/ipykernel_1310472/3735111654.py", line 18, in <module>
import torch
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/__init__.py", line 1421, in <module>
_C._initExtension(manager_path())
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 247, in <module>
_lazy_call(_check_capability)
File "/home/felipe/anaconda3/envs/cuda_12.1/lib/python3.11/site-packages/torch/cuda/__init__.py", line 244, in _lazy_call
_queued_calls.append((callable, traceback.format_stack()))
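As a side note on the reproduction: one way to confirm that the `CUDA_VISIBLE_DEVICES` restriction is in effect, without initializing CUDA in the notebook process itself (which `notebook_launcher` needs to stay uninitialized), is to query the device count from a child process. This is only an illustrative sketch:
```python
import os
import subprocess
import sys

os.environ["CUDA_VISIBLE_DEVICES"] = "1,2"

# Ask a child process for the device count so the notebook's own process
# never initializes CUDA before notebook_launcher forks its workers.
out = subprocess.run(
    [sys.executable, "-c", "import torch; print(torch.cuda.device_count())"],
    env=os.environ.copy(),
    capture_output=True,
    text=True,
)
print(out.stdout.strip())
```
With the restriction above this should print 2 (and 3 without it, given the three GPUs in this machine).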
### Expected behavior
It runs with 2 GPUs as it does with 3 GPUs and 3 processes | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2459/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2458 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2458/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2458/comments | https://api.github.com/repos/huggingface/accelerate/issues/2458/events | https://github.com/huggingface/accelerate/issues/2458 | 2,140,721,128 | I_kwDOEmVyfs5_mM_o | 2,458 | Fine-tuning only doesn't work with "basic" distributed settings | {
"login": "ccruttjr",
"id": 146245010,
"node_id": "U_kgDOCLeFkg",
"avatar_url": "https://avatars.githubusercontent.com/u/146245010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ccruttjr",
"html_url": "https://github.com/ccruttjr",
"followers_url": "https://api.github.com/users/ccruttjr/followers",
"following_url": "https://api.github.com/users/ccruttjr/following{/other_user}",
"gists_url": "https://api.github.com/users/ccruttjr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ccruttjr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ccruttjr/subscriptions",
"organizations_url": "https://api.github.com/users/ccruttjr/orgs",
"repos_url": "https://api.github.com/users/ccruttjr/repos",
"events_url": "https://api.github.com/users/ccruttjr/events{/privacy}",
"received_events_url": "https://api.github.com/users/ccruttjr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Just to be sure I understand correctly, you want to use DDP and you run out of memory? Could you please paste the full error message you get?"
] | 2024-02-18T03:46:30 | 2024-02-19T11:51:03 | null | NONE | null | null | null | ### System Info
```Shell
- `Accelerate` version: 0.25.0
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Numpy version: 1.26.3
- PyTorch version (GPU?): 2.1.2 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 31.25 GB
- GPU type: NVIDIA GeForce RTX 3090 (2 of them)
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
Accelerate works when I use non-distributed training, any DeepSpeed config, and FSDP. It does not, however, work when I just select multi-GPU and leave all settings at their defaults. The processes seem to run out of VRAM, even though there should be PLENTY of space. Here are the yaml config files that worked/didn't work... followed by the code and the error message. I tried it with and without `NCCL_P2P_DISABLE=1` to see if that changed anything, but to no avail. Also, jeez, running it solo is so much faster haha. I'd love to find out what the issue is. I don't seem to be using up all my CPU RAM or processing power, and running it solo doesn't even use half of what I need according to `nvidia-smi` and `accelerate estimate-memory` with TinyLlama.
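For rough context on the memory side, here is a back-of-envelope estimate of the model plus optimizer footprint (assuming fp32 AdamW state for the ~1.1B-parameter TinyLlama; these are ballpark numbers, not measurements from this run):
```python
# Illustrative estimate only: fp32 weights + fp32 grads + AdamW exp_avg + exp_avg_sq
params = 1.1e9                   # approx. parameter count of TinyLlama-1.1B
bytes_per_param = 4 + 4 + 4 + 4  # weights, grads, and two AdamW moment buffers, all fp32
print(f"~{params * bytes_per_param / 1024**3:.1f} GiB before activations")  # ~16.4 GiB
```
Activations, the CUDA context, and DDP's gradient buckets come on top of that figure on each 24 GB RTX 3090.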
### non-distributed (works)
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: 'NO'
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 1
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### base-distrubuted (doesn't work)
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### 0 DeepSpeed ZeRO (works)
```yaml
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
gradient_accumulation_steps: 1
zero3_init_flag: false
zero_stage: 0
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### FSDP (works)
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: NO_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: 2
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Here's the code.
```python
import argparse
from time import time
import torch
from accelerate import Accelerator
from datasets import Dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
get_linear_schedule_with_warmup,
set_seed,
)
# This allows adjusting training arguments without needing to change the code
def parse_args():
parser = argparse.ArgumentParser(description="Training script arguments.")
parser.add_argument("--batch_size", type=int, default=1,
help="Batch size for training.")
parser.add_argument("--mixed_precision", type=str,
default="bf16", help="Mixed precision type.")
parser.add_argument("--lr", type=float, default=5e-5,
help="Learning rate.")
parser.add_argument("--num_epochs", type=int, default=3,
help="Number of training epochs.")
parser.add_argument("--seed", type=int, default=None, help="Random seed.")
parser.add_argument("--num_warmup_steps", type=int,
default=100, help="Number of warm-up steps.")
parser.add_argument("--num_processes", type=int,
default=2, help="Number of gpus to use.")
parser.add_argument("--model_name", type=str,
default="TinyLlama/TinyLlama-1.1B-Chat-v1.0", help="Model to use.")
parser.add_argument("--data_location", type=str,
default="examples/preprocessed_data.json", help="File location for data.")
parser.add_argument("--save_location", type=str,
default="saved_1000", help="File location for data.")
parser.add_argument("--gradient_accumulation_steps",
type=int, default=1, help="Gradient accumulation steps.")
return parser.parse_args()
def process_dataset(json_file, tokenizer):
ds = Dataset.from_json(json_file)
def transform_example(example):
# Construct system message
system_message = f"Consult ID: {example['CONSULTID']}. Patient's age: {example['AGE_AT_CONSULT']}. Gender: {example['GENDER']}. Diagnosis Code: {example['DIAGNOSIS_CODE']}."
# Construct messages in the required format
messages = [
{"role": "system", "content": system_message},
{"role": "user", "content": example["PCP_MESSAGE"]},
{"role": "assistant", "content": example["SR_MESSAGE"]}
]
return messages
ds = ds.map(lambda x: {"formatted_chat": tokenizer.apply_chat_template(transform_example(x), tokenize=False, add_generation_prompt=False)})
return ds
def get_dataloaders(accelerator: Accelerator, batch_size, model_name, data_location, save_location):
# 1. Initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
# 2. Convert JSON to readable dataset
with accelerator.main_process_first():
dataset = process_dataset(data_location, tokenizer)
accelerator.print(dataset["formatted_chat"][0])
def tokenize_function(examples):
# Tokenize, pad and truncate the 'formatted_chat' content
return tokenizer(examples["formatted_chat"], padding="max_length", truncation=True, max_length=128)
with accelerator.main_process_first():
tokenized_dataset = dataset.map(tokenize_function, batched=True)
tokenized_dataset.set_format(
"torch", columns=["input_ids", "attention_mask"])
# 4
split_datasets = tokenized_dataset.train_test_split(test_size=0.2)
tokenized_train_dataset = split_datasets["train"]
tokenized_eval_dataset = split_datasets["test"]
if accelerator.is_main_process:
print("saving tokenizer")
# Saving the tokenizer
tokenizer.save_pretrained(save_location)
print("saved tokenizer")
# 5
train_sampler = DistributedSampler(
tokenized_train_dataset, num_replicas=accelerator.num_processes, rank=accelerator.process_index, shuffle=True
)
eval_sampler = DistributedSampler(
tokenized_eval_dataset, num_replicas=accelerator.num_processes, rank=accelerator.process_index, shuffle=False
)
# 6
train_dataloader = DataLoader(
tokenized_train_dataset,
batch_size=batch_size,
drop_last=True,
sampler=train_sampler
)
eval_dataloader = DataLoader(
tokenized_eval_dataset,
batch_size=batch_size*2,
drop_last=(accelerator.mixed_precision == "fp8"),
sampler=eval_sampler
)
accelerator.print("returning dataloaders")
return train_dataloader, eval_dataloader
# 1. Initialize accelerator with mixed precision and define training parameters via arguments given in command line
# 2. Sets seed (if given as a command line argument) for reproducibility
# 3. Get dataloaders
# 4. Initialize more training parameters and "prepare"/optimize them via Accelerate
# 5. Train/fine-tune model with new data & set parameters using FSDP
# 6. Evaluate quality of trainer for that epoch
# 7. Have the first GPU save the newly fine-tuned dataset
def training_function(args):
# 1
accelerator = Accelerator(mixed_precision=args.mixed_precision,
gradient_accumulation_steps=args.gradient_accumulation_steps)
accelerator.print("set acceleraror")
lr = args.lr
num_epochs = args.num_epochs
batch_size = args.batch_size
num_warmup_steps = args.num_warmup_steps
# 2
if args.seed:
set_seed(args.seed)
# 3
train_dataloader, eval_dataloader = get_dataloaders(
accelerator, batch_size, args.model_name, args.data_location, args.save_location)
accelerator.print("set dataloaders")
# 4
    # Instantiate the model (we build the model here so that the seed also controls new weights initialization)
model = AutoModelForCausalLM.from_pretrained(args.model_name)
# model = accelerator.prepare(model)
accelerator.print("set model")
optimizer = AdamW(params=model.parameters(), lr=lr)
accelerator.print("set optimizer")
# Instantiate scheduler
lr_scheduler = get_linear_schedule_with_warmup(
optimizer=optimizer,
num_warmup_steps=num_warmup_steps,
num_training_steps=(len(train_dataloader) *
num_epochs) // args.gradient_accumulation_steps
)
accelerator.print("set lr_scheduler")
# Prepare everything
# There is no specific order to remember, we just need to unpack the objects in the same order we gave them to the
# prepare method.
accelerator.wait_for_everyone()
accelerator.print("preparing!")
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
)
accelerator.print("preprared stuff")
# Initialize logging variables
total_train_loss = 0
total_eval_loss = 0
# 5
# Now we train the model
for epoch in range(num_epochs):
accelerator.print("training")
model.train()
total_train_loss = 0
for batch in tqdm(train_dataloader, desc="Training"):
with accelerator.accumulate(model):
# Process the batch
inputs = {k: v.to(accelerator.device)
for k, v in batch.items()}
if "labels" not in inputs:
inputs["labels"] = inputs["input_ids"]
outputs = model(**inputs)
loss = outputs.loss
total_train_loss += loss.item()
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
accelerator.wait_for_everyone()
# 6
# Evaluation loop after each training epoch
model.eval()
total_eval_loss = 0
for batch in tqdm(eval_dataloader, "Evaluating"):
with torch.no_grad():
inputs = {k: v.to(accelerator.device)
for k, v in batch.items()}
if "labels" not in inputs:
inputs["labels"] = inputs["input_ids"]
outputs = model(**inputs)
loss = outputs.loss
total_eval_loss += loss.item()
accelerator.wait_for_everyone()
        # Log the average losses
avg_train_loss = total_train_loss / len(train_dataloader)
avg_eval_loss = total_eval_loss / len(eval_dataloader)
print(
f"Epoch: {epoch}, Average Training Loss: {avg_train_loss}, Average Evaluation Loss: {avg_eval_loss}")
accelerator.wait_for_everyone()
# 7
accelerator.wait_for_everyone()
accelerator.print("saving")
accelerator.unwrap_model(model).save_pretrained(
args.save_location,
is_main_process=accelerator.is_main_process,
save_function=accelerator.save,
state_dict=accelerator.get_state_dict(model),
)
def main():
args = parse_args()
training_function(args)
if __name__ == "__main__":
start = time()
main()
print(f"Total Execution Time: {time() - start} seconds")
```
I'd run it via
```bash
$ accelerate launch file.py --num_processes 1 # or 2 depending on situation
```
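Since several configs are being compared, the run can also be pointed at a specific config file explicitly; the filename below is just a placeholder for whichever of the yaml files above is being tested:
```bash
$ accelerate launch --config_file base_distributed.yaml file.py --num_processes 2
```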
Here's an example of my `example/preprocessed_data.json` (not real data)
```json
[
{
"CONSULTID": "61110688",
"TAR_STATUS_NAME": "Closed",
"CODE_ID": "108",
"CODE_DESC": "Cancelled",
"STATUS": "02.Cancelled",
"YEAR_CREATED": "2023",
"SUBMIT_TO_RESPOND": "3.17",
"SUBMIT_TO_CLOSE": "30.06",
"SPECIALTY_NAME": "GASTROENTEROLOGY - ADULT",
"GENDER": "M",
"AGE_AT_CONSULT": "69",
"CREATED": "2023-01-03T12:15:16",
"DOB": "1953-05-01",
"PCP_NAME": "Armen Babaian",
"SR_NAME": "James Tabibian",
"ORG_NAME": "AAA - OVM Medi-Cal Ineligible Over 50",
"ORG_TYPE": "OTHER",
"DIAGNOSIS_CODE": "Z12.11",
"CATEGORY_NAME": "Medicine/Non-Surg",
"SUBCATEGORY_NAME": "GI",
"PCP_MESSAGE": "Hi James, I have a patient with chronic constipation who has failed medical management. What are your recommendations?",
"TQ_HEADER": "Clinical question",
"SR_MESSAGE": "Hi Armen, thanks for your message. I would recommend you referring your patient to a gastroenterologist for further evaluation and treatment. They may need additional tests, such as a colonoscopy or endoscopy, to determine the cause of their constipation. Additionally, I recommend you discuss with your patient about dietary and lifestyle changes that may help relieve their symptoms."
},
{
"CONSULTID": "61110688",
"TAR_STATUS_NAME": "Closed",
"CODE_ID": "108",
"CODE_DESC": "Cancelled",
"STATUS": "02.Cancelled",
"YEAR_CREATED": "2023",
"SUBMIT_TO_RESPOND": "3.17",
"SUBMIT_TO_CLOSE": "30.06",
"SPECIALTY_NAME": "GASTROENTEROLOGY - ADULT",
"GENDER": "M",
"AGE_AT_CONSULT": "69",
"CREATED": "2023-01-03T12:15:16",
"DOB": "1953-05-01",
"PCP_NAME": "Armen Babaian",
"SR_NAME": "James Tabibian",
"ORG_NAME": "AAA - OVM Medi-Cal Ineligible Over 50",
"ORG_TYPE": "OTHER",
"DIAGNOSIS_CODE": "Z12.11",
"CATEGORY_NAME": "Medicine/Non-Surg",
"SUBCATEGORY_NAME": "GI",
"PCP_MESSAGE": "Hi James, I have a patient with chronic constipation who has failed medical management. What are your recommendations?",
"TQ_HEADER": "Clinical question",
"SR_MESSAGE": "Hi Armen, thanks for your message. I would recommend you referring your patient to a gastroenterologist for further evaluation and treatment. They may need additional tests, such as a colonoscopy or endoscopy, to determine the cause of their constipation. Additionally, I recommend you discuss with your patient about dietary and lifestyle changes that may help relieve their symptoms."
},
...
]
```
### Expected behavior
. | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2458/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2457 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2457/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2457/comments | https://api.github.com/repos/huggingface/accelerate/issues/2457/events | https://github.com/huggingface/accelerate/issues/2457 | 2,137,777,193 | I_kwDOEmVyfs5_a-Qp | 2,457 | Communication/NCCL failures training FSDP in multi-node environment with SLURM | {
"login": "jpgard",
"id": 7265452,
"node_id": "MDQ6VXNlcjcyNjU0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7265452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jpgard",
"html_url": "https://github.com/jpgard",
"followers_url": "https://api.github.com/users/jpgard/followers",
"following_url": "https://api.github.com/users/jpgard/following{/other_user}",
"gists_url": "https://api.github.com/users/jpgard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jpgard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpgard/subscriptions",
"organizations_url": "https://api.github.com/users/jpgard/orgs",
"repos_url": "https://api.github.com/users/jpgard/repos",
"events_url": "https://api.github.com/users/jpgard/events{/privacy}",
"received_events_url": "https://api.github.com/users/jpgard/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"A couple of additional comments:\r\n* I've tried using torchrun as the launcher instead (with `export LAUNCHER=\"NCCL_DEBUG=INFO torchrun --nproc_per_node=$GPUS_PER_NODE --nnodes=$NNODES --node_rank=\\$SLURM_PROCID --master_addr=$MASTER_ADDR --master_port=$MASTER_PORT \"`) but it raises the same error\r\n* I am not 100% clear on whether NUM_PROCESSES should be set to 8 (the number of GPUs per node) or 16 (the total number of processes); I can't find this clearly documented in Accelerate either but may have missed something; the [docs](https://huggingface.co/docs/accelerate/package_reference/cli#accelerate-launch) say \"The total number of processes to be launched in parallel\" but I suppose this could be in parallel on one node, or on all nodes.",
"What kind of GPUs are these?",
"They are 40GB A100s, in nodes of 8"
] | 2024-02-16T02:47:45 | 2024-02-17T03:02:54 | null | NONE | null | null | null | ### System Info
```Shell
output of `accelerate env`: (note as shown below this prints the DEFAULT accelerate config and not the exact config being used for this job)
- `Accelerate` version: 0.27.2
- Platform: Linux-5.15.0-1037-aws-x86_64-with-glibc2.17
- Python version: 3.8.18
- Numpy version: 1.24.4
- PyTorch version (GPU?): 2.2.0+cu121 (False)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 123.82 GB
- `Accelerate` default config:
- compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 2
- main_process_ip:
- main_process_port: 1234
- rdzv_backend: static
- same_network: True
- main_training_function: main
- fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch': 'BACKWARD_PRE', 'fsdp_cpu_ram_efficient_loading': True, 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 'FULL_SHARD', 'fsdp_state_dict_type': 'FULL_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_transformer_layer_cls_to_wrap': 'LlamaDecoderLayer', 'fsdp_use_orig_params': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
I'm attempting multi-node training using the SLURM scheduler, launching the job on 2 nodes with 8 GPUs each. My training script runs fine in a single-node environment with FSDP, and it *starts* fine in the multi-node setting, right up until actual communication between the nodes is required.
When the script gets to the parts that actually initialize multi-node training, the processes seem to have trouble communicating across nodes. I can see the logging output from all 16 processes, the data is loaded, etc., but the script fails at `accelerator.prepare()`.
Specifically, I see a stack trace containing these lines (the complete stack trace is below):
```
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
```
Note that it is possible I have misconfigured the accelerate config, or the SLURM settings (tasks/node counts etc), but based on the example [here](https://gist.github.com/pacman100/1cb1f17b2f1b3139a63b764263e70b25) with corresponding FSDP config [here](https://github.com/pacman100/DHS-LLM-Workshop/blob/6093a2320543c2ac903a1fbb9b034ea714db43c9/personal_copilot/training/configs/fsdp_config.yaml#L4) this seems to be set up correctly to me.
Any thoughts would be appreciated; I've tried lots of different configurations and tinkered with the environment to make sure the versions of pytorch/NCCL/accelerate are all compatible.
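For reference, a bare NCCL all-reduce across the two nodes, independent of FSDP, can be exercised with a minimal script like the sketch below (the script name and details are placeholders, not something from this job):
```python
# nccl_check.py: hypothetical minimal cross-node all-reduce test
import os

import torch
import torch.distributed as dist

# RANK / WORLD_SIZE / MASTER_ADDR / MASTER_PORT come from the launcher environment
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

x = torch.ones(1, device="cuda")
dist.all_reduce(x)  # defaults to SUM, so every rank should end up with the world size
print(f"rank {dist.get_rank()}: {x.item()}")
dist.destroy_process_group()
```
It can be launched with the same `srun`/`accelerate launch` command as the training job, just pointing at this script instead of `scripts/train.py`.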
## Contents of fsdp_config_base.yaml I am using:
```
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sync_module_states: true
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 8
mixed_precision: bf16
rdzv_backend: c10d
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
## Relevant chunks of the sbatch script I am launching the job with:
```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=10
#SBATCH --partition=a40x
#SBATCH --gpus-per-node=8
#SBATCH --exclusive
#SBATCH --time=04-00:00:00
#SBATCH --account=nextgends
#SBATCH --chdir=/admin/home-jpgard/rtfm
#SBATCH --output=/admin/home-jpgard/rtfm/slurm-out/%j.out
#SBATCH --err=/admin/home-jpgard/rtfm/slurm-out/%j.err
#SBATCH --exclude=ip-10-0-201-106,ip-10-0-202-154
################# code block adapted from https://gist.github.com/pacman100/1cb1f17b2f1b3139a63b764263e70b25
set -x -e
# force crashing on nccl issues like hanging broadcast
export NCCL_ASYNC_ERROR_HANDLING=1
export NCCL_DEBUG=INFO
GPUS_PER_NODE=8
NNODES=$SLURM_NNODES
NUM_PROCESSES=$(expr $NNODES \* $GPUS_PER_NODE)
# A function to parse slurm's node notation when it uses 'bracketed' values for SLURM_JOB_NODELIST
# Function to parse and expand node list from SLURM_JOB_NODELIST
expand_nodes() {
# The input is something like "ip-10-0-231-[1,86]"
local nodelist=$1
# Replace '[' and ']' with space and split the string
local base=$(echo $nodelist | sed -E 's/\[([0-9]+),([0-9]+)\]/ \1 \2 /')
# Read into array
read -a parts <<< "$base"
# Check if we have three parts: prefix, start, end
if [ ${#parts[@]} -eq 3 ]; then
local prefix=${parts[0]}
local start=${parts[1]}
local end=${parts[2]}
# Generate sequence
for i in $(seq $start $end); do
echo "${prefix}${i}"
return # Return after first IP to mimic head node behavior
done
else
# If the format does not include a range, just echo the input
echo $nodelist
fi
}
# Extract the first node name from SLURM_JOB_NODELIST
# This assumes the format "node-list: ip-10-0-209-157,ip-10-0-231-1" and extracts the first node name
echo "SLURM_JOB_NODELIST is $SLURM_JOB_NODELIST"
node_name=$(echo $SLURM_JOB_NODELIST | sed 's/node-list: //' | cut -d, -f1)
# Now, resolve this node name to an IP address
# Using getent ahosts (You can also use nslookup if getent does not work as expected)
MASTER_ADDR=$(getent ahosts $node_name | head -n 1 | awk '{print $1}')
# Check if we got an IP
if [ ! -z "$MASTER_ADDR" ]; then
echo "Head node IP: $MASTER_ADDR"
else
echo "Failed to resolve head node IP address"
# Extract the first node name from SLURM_JOB_NODELIST and expand if needed
node_name=$(expand_nodes $SLURM_JOB_NODELIST)
# Now, resolve this node name to an IP address using getent ahosts
MASTER_ADDR=$(getent ahosts $node_name | head -n 1 | awk '{print $1}')
echo "Head node IP after parsing: $MASTER_ADDR"
fi
MASTER_PORT=6999
# OTHER LAUNCHERS CAN BE USED HERE
export LAUNCHER="/admin/home-jpgard/miniconda3/envs/rtfm/bin/accelerate launch \
--config_file /admin/home-jpgard/rtfm/fsdp_config_base.yaml \
--num_processes $NUM_PROCESSES \
--main_process_ip $MASTER_ADDR \
--num_machines $NNODES \
--main_process_port $MASTER_PORT \
--machine_rank \$SLURM_PROCID \
"
echo "SLURM_JOB_ID is ${SLURM_JOB_ID}"
echo 'activating conda environment'
source /admin/home-jpgard/.bashrc
source /admin/home-jpgard/miniconda3/etc/profile.d/conda.sh
which conda
conda activate rtfm
which python
export PROGRAM="\
scripts/train.py \
--more-args-here
--bf16 True \
--use_amp \
"
export CMD="$LAUNCHER $PROGRAM"
echo "about to run ${CMD}"
/opt/slurm/bin/srun --jobid $SLURM_JOBID /usr/bin/bash -c "$CMD"
```
## Full stack trace:
```
Traceback (most recent call last):
File "scripts/train.py", line 582, in <module>
main(
File "scripts/train.py", line 206, in main
model = accelerator.prepare(model)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1228, in prepare
result = tuple(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1229, in <genexpr>
self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1105, in _prepare_one
return self.prepare_model(obj, device_placement=device_placement)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1387, in prepare_model
model = FSDP(model, **kwargs)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 477, in __init__
_auto_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 101, in _auto_wrap
_recursive_wrap(**recursive_wrap_kwargs, **root_kwargs) # type: ignore[arg-type]
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 543, in _recursive_wrap
wrapped_child, num_wrapped_params = _recursive_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 543, in _recursive_wrap
wrapped_child, num_wrapped_params = _recursive_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 543, in _recursive_wrap
wrapped_child, num_wrapped_params = _recursive_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 561, in _recursive_wrap
return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 490, in _wrap
return wrapper_cls(module, **kwargs)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 503, in __init__
_init_param_handle_from_module(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/_init_utils.py", line 587, in _init_param_handle_from_module
_sync_module_params_and_buffers(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/_init_utils.py", line 1068, in _sync_module_params_and_buffers
_sync_params_and_buffers(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/utils.py", line 303, in _sync_params_and_buffers
dist._broadcast_coalesced(
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
socketStartConnect: Connect to fe80::a849:bbff:fe73:19bd%veth8299cd8<54785> failed : Software caused connection abort
Traceback (most recent call last):
File "scripts/train.py", line 582, in <module>
main(
File "scripts/train.py", line 206, in main
model = accelerator.prepare(model)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1228, in prepare
result = tuple(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1229, in <genexpr>
self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1105, in _prepare_one
return self.prepare_model(obj, device_placement=device_placement)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/accelerator.py", line 1387, in prepare_model
model = FSDP(model, **kwargs)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 477, in __init__
_auto_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 101, in _auto_wrap
_recursive_wrap(**recursive_wrap_kwargs, **root_kwargs) # type: ignore[arg-type]
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 543, in _recursive_wrap
wrapped_child, num_wrapped_params = _recursive_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 543, in _recursive_wrap
wrapped_child, num_wrapped_params = _recursive_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 543, in _recursive_wrap
wrapped_child, num_wrapped_params = _recursive_wrap(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 561, in _recursive_wrap
return _wrap(module, wrapper_cls, **kwargs), nonwrapped_numel
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/wrap.py", line 490, in _wrap
return wrapper_cls(module, **kwargs)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 503, in __init__
_init_param_handle_from_module(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/_init_utils.py", line 587, in _init_param_handle_from_module
_sync_module_params_and_buffers(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/fsdp/_init_utils.py", line 1068, in _sync_module_params_and_buffers
_sync_params_and_buffers(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/utils.py", line 303, in _sync_params_and_buffers
dist._broadcast_coalesced(
torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
Last error:
/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/utils/launch.py:192: FutureWarning: `fsdp_backward_prefetch_policy` is deprecated and will be removed in version 0.27.0 of 🤗 Accelerate. Use `fsdp_backward_prefetch` instead
warnings.warn(
[2024-02-16 02:42:00,874] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 4025500 closing signal SIGTERM
[2024-02-16 02:42:00,875] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 4025502 closing signal SIGTERM
[2024-02-16 02:42:00,875] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 4025503 closing signal SIGTERM
[2024-02-16 02:42:00,875] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 4025504 closing signal SIGTERM
[2024-02-16 02:42:00,876] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 4025505 closing signal SIGTERM
[2024-02-16 02:42:00,876] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 4025506 closing signal SIGTERM
[2024-02-16 02:42:00,876] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 4025507 closing signal SIGTERM
/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/utils/launch.py:192: FutureWarning: `fsdp_backward_prefetch_policy` is deprecated and will be removed in version 0.27.0 of 🤗 Accelerate. Use `fsdp_backward_prefetch` instead
warnings.warn(
[2024-02-16 02:42:00,885] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1267868 closing signal SIGTERM
[2024-02-16 02:42:00,885] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1267870 closing signal SIGTERM
[2024-02-16 02:42:00,886] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1267872 closing signal SIGTERM
[2024-02-16 02:42:00,886] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1267873 closing signal SIGTERM
[2024-02-16 02:42:00,886] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1267874 closing signal SIGTERM
[2024-02-16 02:42:00,886] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1267875 closing signal SIGTERM
[2024-02-16 02:42:00,886] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 1267876 closing signal SIGTERM
[2024-02-16 02:42:03,298] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 1267869) of binary: /admin/home-jpgard/miniconda3/envs/rtfm/bin/python
Traceback (most recent call last):
File "/admin/home-jpgard/miniconda3/envs/rtfm/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
args.func(args)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/commands/launch.py", line 1010, in launch_command
multi_gpu_launcher(args)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/accelerate/commands/launch.py", line 672, in multi_gpu_launcher
distrib_run.run(args)
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/run.py", line 803, in run
elastic_launch(
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 135, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/admin/home-jpgard/miniconda3/envs/rtfm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 268, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
scripts/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-02-16_02:42:00
host : ip-10-0-209-157.us-west-2.compute.internal
rank : 9 (local_rank: 1)
exitcode : 1 (pid: 1267869)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
```
### Expected behavior
I expect training to work in the distributed setting just as it does in the single-node setting.
| {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2457/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2456 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2456/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2456/comments | https://api.github.com/repos/huggingface/accelerate/issues/2456/events | https://github.com/huggingface/accelerate/pull/2456 | 2,137,533,181 | PR_kwDOEmVyfs5nB8aa | 2,456 | [docs] Quicktour | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2456). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-15T22:10:07 | 2024-02-20T19:30:21 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2456",
"html_url": "https://github.com/huggingface/accelerate/pull/2456",
"diff_url": "https://github.com/huggingface/accelerate/pull/2456.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2456.patch",
"merged_at": null
} | This PR updates the Quicktour to focus more on the library's core offerings:
➕ organizes the Quicktour around the three main features: unified launcher, `Accelerator` class, and Big Model Inference
➕ new tutorials for Execution process and TPU training
➕ adds save Transformer models from Accelerator API page to same section with saving/loading models in Add Accelerate to your code tutorial
➕ adds `on_local_main_process`, `on_main_process`, `on_process`, and `on_local_process` from Accelerator API page to the Execution process docs
➖ removes the "Common modifications of the base case" section because all these different scenarios can be overwhelming for a new user and because it is not necessarily needed to help users start quickly. These sections will be integrated as a part of the tutorials, and I'll add links to redirect users there from the Quicktour. Only exception is the Launching distributed training from a notebook because there is already a tutorial for that.
- [x] move content from the "Common modifications of the base case" section into the Tutorials | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2456/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2456/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2455 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2455/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2455/comments | https://api.github.com/repos/huggingface/accelerate/issues/2455/events | https://github.com/huggingface/accelerate/issues/2455 | 2,137,469,070 | I_kwDOEmVyfs5_ZzCO | 2,455 | Help on Model Saving Errors with DeepSpeed and PEFT During Supervised Fine-Tuning | {
"login": "johncordeiro",
"id": 6073075,
"node_id": "MDQ6VXNlcjYwNzMwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6073075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johncordeiro",
"html_url": "https://github.com/johncordeiro",
"followers_url": "https://api.github.com/users/johncordeiro/followers",
"following_url": "https://api.github.com/users/johncordeiro/following{/other_user}",
"gists_url": "https://api.github.com/users/johncordeiro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johncordeiro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johncordeiro/subscriptions",
"organizations_url": "https://api.github.com/users/johncordeiro/orgs",
"repos_url": "https://api.github.com/users/johncordeiro/repos",
"events_url": "https://api.github.com/users/johncordeiro/events{/privacy}",
"received_events_url": "https://api.github.com/users/johncordeiro/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"We need:\r\n\r\n1. Your accelerate version (`accelerate env`)\r\n2. How are you trying to save your model?",
"1. Your accelerate version (accelerate env)\r\n\r\n```\r\n- `Accelerate` version: 0.27.0.dev0\r\n- Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Numpy version: 1.24.1\r\n- PyTorch version (GPU?): 2.1.0+cu118 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- System RAM: 944.45 GB\r\n- GPU type: NVIDIA A100 80GB PCIe\r\n```\r\n\r\n2. How are you trying to save your model?\r\nI'm doing a `trainer.save_model(dir_model_name)` and trainer is the SFTTrainer instance used to train the model."
] | 2024-02-15T21:20:22 | 2024-02-16T19:02:01 | null | NONE | null | null | null | I'm conducting Supervised Fine-Tuning on mistral-instruct-v0.2 for a zero-shot classification task. I managed to implement it using a Pipeline with Parameter-Efficient Fine-Tuning (PEFT) featuring Lora. To handle a higher batch size, I configured Accelerate alongside DeepSpeed. This allowed me to initiate the training process with a simple command: `accelerate launch train.py`. The training took 2 days on 4xH100 GPUs to complete.
Although the training process appeared successful, I encountered an error when attempting to save the model:
```
Missing key(s) in state_dict: "base_model.model.model.embed_tokens.weight", "base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight" ... (continues)
```
It appears that DeepSpeed encountered an issue when loading the best checkpoint into the base model for final output generation, detecting discrepancies in the layer configurations, which I cannot explain. I have saved all checkpoints from this training session. Therefore, I have two questions for those with similar experiences:
1) Why did this happen?
2) How can I use the best checkpoint generated by DeepSpeed to update the base model and create a new one?
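For question 2, one commonly suggested route (a hedged sketch, not taken from this thread) is DeepSpeed's checkpoint-consolidation utility, which rebuilds a full fp32 state dict from the ZeRO-3 shards saved under each checkpoint's `global_step*` folder; DeepSpeed also drops a standalone `zero_to_fp32.py` script into each checkpoint directory for the same purpose. In the sketch below, the paths are placeholders and `model` is assumed to be a freshly built, CPU-resident copy of the same base model + LoRA configuration (not the ZeRO-3-wrapped training instance).

```python
# Hedged sketch (paths are placeholders): consolidate a ZeRO-3 checkpoint into one
# fp32 state dict, load it into a fresh PEFT-wrapped model, then merge the adapters.
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

state_dict = get_fp32_state_dict_from_zero_checkpoint("output/checkpoint-best")  # dir containing global_step*/
model.load_state_dict(state_dict, strict=False)   # fresh CPU copy of the base model + LoRA config
merged = model.merge_and_unload()                 # PEFT: fold the LoRA weights into the base model
merged.save_pretrained("output/merged-model")
```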
More information:
Accelerate config:
```yaml
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
deepspeed_config_file: /app/transformers-training/deepspeed.json
zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
Deepspeed config:
```json
{
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": 4,
"fp16": {
"enabled": true
},
"scheduler": {
"type": "WarmupCosineLR",
"params": {
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"stage3_gather_16bit_weights_on_model_save": false
}
}
``` | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2455/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2454 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2454/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2454/comments | https://api.github.com/repos/huggingface/accelerate/issues/2454/events | https://github.com/huggingface/accelerate/issues/2454 | 2,137,273,271 | I_kwDOEmVyfs5_ZDO3 | 2,454 | [BUG] Unexpected GPU memory consumption when using transformers PEFT in Deepspeed Zero3 | {
"login": "alekseymalakhov11",
"id": 131314005,
"node_id": "U_kgDOB9OxVQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131314005?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alekseymalakhov11",
"html_url": "https://github.com/alekseymalakhov11",
"followers_url": "https://api.github.com/users/alekseymalakhov11/followers",
"following_url": "https://api.github.com/users/alekseymalakhov11/following{/other_user}",
"gists_url": "https://api.github.com/users/alekseymalakhov11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alekseymalakhov11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alekseymalakhov11/subscriptions",
"organizations_url": "https://api.github.com/users/alekseymalakhov11/orgs",
"repos_url": "https://api.github.com/users/alekseymalakhov11/repos",
"events_url": "https://api.github.com/users/alekseymalakhov11/events{/privacy}",
"received_events_url": "https://api.github.com/users/alekseymalakhov11/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Note that the same issue has been opened on [PEFT](https://github.com/huggingface/peft/issues/1470). We should keep discussion in one thread to avoid duplicate work.",
"And in [transformers](https://github.com/huggingface/transformers/issues/29047) as well (in the future, please do not do this and just open in one. All of these were opened within minutes of each other). ",
"As younes has already started responding there, let's keep it in the transformers one please"
] | 2024-02-15T19:27:27 | 2024-02-16T13:08:49 | null | NONE | null | null | null | ### System Info
```Shell
transformers = "4.35.0"
peft = "0.7.1"
torch = ">=2.0.0"
accelerate = "^0.24.1"
deepspeed = "^0.9.5"
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
### Description
Llama30B with Lora adapters cannot fit into 8 x A100 (80GB).
### Demonstration of Problem and Experiment Setups
I will illustrate this issue using various experiment setups on smaller models:
1. 7b+lora+stage 3
![image](https://github.com/huggingface/peft/assets/131314005/5d30fd07-2b4f-4da2-a2fb-3b9434fbb6c8)
2. 7b+stage 3
![image](https://github.com/huggingface/peft/assets/131314005/b9453c69-a576-40a4-8e9c-0a4bd47bb4ab)
3. 7b+lora+stage 2
![image](https://github.com/huggingface/peft/assets/131314005/4a754f4d-d89b-4bc9-bf19-bcd02710a2a7)
4. 7b + stage 2
![image](https://github.com/huggingface/peft/assets/131314005/5e5bfa69-99d7-4918-9b0b-9c432ef02bef)
All other parameters remain consistent in the experiments below.
### Expected behavior
### Suspected Cause
A possible reason for this issue is that Zero3 does not partition non-trainable weights across GPUs. The basis for this assumption is:
- The memory consumption is consistent with predicted values when Lora is not used.
- When training the model with both Zero2 and Zero3 using Lora, I observe nearly the same memory consumption.
- A [code examination](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/stage3.py#L318C14-L318C58) of the Zero Runtime sources also suggests this could be the case.
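A rough way to test this hypothesis is to check whether frozen parameters actually carry the partitioned-parameter attributes after the model is wrapped by ZeRO-3. This is our own hedged diagnostic sketch; the `ds_*` attribute names are an implementation detail of ZeRO-3 and are an assumption here, not something confirmed in this thread.

```python
# Hedged diagnostic sketch: ZeRO-3 attaches ds_* attributes to parameters it partitions,
# so frozen base weights that still report their full numel locally were not sharded.
for name, param in model.named_parameters():
    partitioned = hasattr(param, "ds_numel")                       # assumption: set by ZeRO-3 wrapping
    local = param.ds_tensor.numel() if partitioned else param.numel()
    print(f"{name}: trainable={param.requires_grad} partitioned={partitioned} local_numel={local}")
```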
### Expected behavior
Training the model with Zero3 while using Lora should consume significantly less memory than Zero2 with Lora.
We also opened an [issue in Deepspeed](https://github.com/microsoft/DeepSpeed/issues/5109), but no one has assisted us. Additionally, you might have more experience with PEFT and Deepspeed integration in the Transformers trainer. | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2454/timeline | null | null | false |
https://api.github.com/repos/huggingface/accelerate/issues/2453 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2453/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2453/comments | https://api.github.com/repos/huggingface/accelerate/issues/2453/events | https://github.com/huggingface/accelerate/pull/2453 | 2,137,088,462 | PR_kwDOEmVyfs5nAb4j | 2,453 | Use grad-accum on TPU | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4217311860,
"node_id": "LA_kwDOEmVyfs77XxJ0",
"url": "https://api.github.com/repos/huggingface/accelerate/labels/TPU",
"name": "TPU",
"color": "CAA9C3",
"default": false,
"description": "Bug or feature on TPU platforms"
}
] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2453). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-15T17:37:10 | 2024-02-15T18:51:08 | null | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2453",
"html_url": "https://github.com/huggingface/accelerate/pull/2453",
"diff_url": "https://github.com/huggingface/accelerate/pull/2453.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2453.patch",
"merged_at": null
} | # What does this PR do?
Solves https://github.com/huggingface/transformers/issues/29042 by allowing `xm.mark_step` to occur if we're running on XLA specifically.
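A minimal sketch of the idea (ours, not the PR's diff): it assumes an already-prepared `accelerator`, `model`, `optimizer`, and `dataloader`, a transformers-style model that returns `.loss`, and a torch-xla environment.

```python
import torch_xla.core.xla_model as xm  # available only in a torch-xla (TPU) environment

for batch in dataloader:
    with accelerator.accumulate(model):
        loss = model(**batch).loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
    xm.mark_step()  # flush the lazy XLA graph each micro-batch, not only on sync steps
```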
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2453/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2452 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2452/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2452/comments | https://api.github.com/repos/huggingface/accelerate/issues/2452/events | https://github.com/huggingface/accelerate/pull/2452 | 2,136,817,686 | PR_kwDOEmVyfs5m_hRP | 2,452 | Check for None | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2452). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-15T15:26:13 | 2024-02-15T15:38:55 | 2024-02-15T15:38:54 | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2452",
"html_url": "https://github.com/huggingface/accelerate/pull/2452",
"diff_url": "https://github.com/huggingface/accelerate/pull/2452.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2452.patch",
"merged_at": "2024-02-15T15:38:54"
} | # What does this PR do?
The CV models test showed a failure when `args` is defined but `kwargs` is `None` (why only them is unclear, as gpt2 didn't face this, but I digress). This check makes sure that padding is only checked for if the value is not `None` (otherwise we get an error about not being able to run it without a tensor-like item)
Fixes # (issue)
Failing test on main
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2452/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2451 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2451/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2451/comments | https://api.github.com/repos/huggingface/accelerate/issues/2451/events | https://github.com/huggingface/accelerate/pull/2451 | 2,136,654,122 | PR_kwDOEmVyfs5m-858 | 2,451 | Add pre-commit configuration | {
"login": "akx",
"id": 58669,
"node_id": "MDQ6VXNlcjU4NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akx",
"html_url": "https://github.com/akx",
"followers_url": "https://api.github.com/users/akx/followers",
"following_url": "https://api.github.com/users/akx/following{/other_user}",
"gists_url": "https://api.github.com/users/akx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akx/subscriptions",
"organizations_url": "https://api.github.com/users/akx/orgs",
"repos_url": "https://api.github.com/users/akx/repos",
"events_url": "https://api.github.com/users/akx/events{/privacy}",
"received_events_url": "https://api.github.com/users/akx/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2451). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-15T14:21:29 | 2024-02-16T10:47:30 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2451",
"html_url": "https://github.com/huggingface/accelerate/pull/2451",
"diff_url": "https://github.com/huggingface/accelerate/pull/2451.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2451.patch",
"merged_at": null
} | # What does this PR do?
This PR adds a [pre-commit](https://pre-commit.com/) configuration, so developers may opt in to running `ruff` and `ruff-format` as a pre-commit hook.
Should reduce the need for "fix lint" commits and `check_code_quality` CI failures.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- Added to contributing readme.
- [ ] Did you write any new necessary tests?
- None required.
## Who can review?
cc @muellerzr @BenjaminBossan | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2451/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2450 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2450/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2450/comments | https://api.github.com/repos/huggingface/accelerate/issues/2450/events | https://github.com/huggingface/accelerate/pull/2450 | 2,135,823,563 | PR_kwDOEmVyfs5m8EsH | 2,450 | Context manager fixes | {
"login": "akx",
"id": 58669,
"node_id": "MDQ6VXNlcjU4NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akx",
"html_url": "https://github.com/akx",
"followers_url": "https://api.github.com/users/akx/followers",
"following_url": "https://api.github.com/users/akx/following{/other_user}",
"gists_url": "https://api.github.com/users/akx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akx/subscriptions",
"organizations_url": "https://api.github.com/users/akx/orgs",
"repos_url": "https://api.github.com/users/akx/repos",
"events_url": "https://api.github.com/users/akx/events{/privacy}",
"received_events_url": "https://api.github.com/users/akx/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2450). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The test failures are interesting:\r\n> torch._dynamo.exc.InvalidBackend: Invalid backend: 'aot_ts_nvfuser', see `torch._dynamo.list_backends()` for available backends.\r\n\r\nThat string is set in an unrelated `test_torch_dynamo_plugin` test (which admonishes about this exact thing)... Looks like this uncovered a more interesting bug. :)\r\n\r\n**EDIT:** No, I was just being a dummy – when restoring the environment, we need to clear out any additions too..."
] | 2024-02-15T07:24:36 | 2024-02-19T11:16:22 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2450",
"html_url": "https://github.com/huggingface/accelerate/pull/2450",
"diff_url": "https://github.com/huggingface/accelerate/pull/2450.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2450.patch",
"merged_at": null
} | # What does this PR do?
* Configures Ruff to prevent `os.getenv` and `os.putenv`; there should be one way to manage the env, and `os.environ` was used more.
* `clear_environment()` didn't actually clear environment variables, it just assigned a new `os.environ` proxy dictionary.
* The environment management context managers wouldn't clean up after themselves if the inner block raised.
* There was another suspect place where cleanup code after `yield` is not wrapped; marked that as a TODO (@muellerzr seems to have touched that function last)
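A minimal sketch of the intended behaviour (ours, not the PR's actual code): mutate `os.environ` in place and restore it in a `finally` block, so both additions made inside the block and the raised-exception path are handled.

```python
import os
from contextlib import contextmanager

@contextmanager
def clear_environment():
    saved = dict(os.environ)
    os.environ.clear()              # actually empty the real mapping instead of rebinding os.environ
    try:
        yield
    finally:
        os.environ.clear()          # drop anything the wrapped block added
        os.environ.update(saved)    # restore the original variables even if the block raised
```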
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- No user-facing changes other than the CMs being more "do-what-I-mean"
- [x] Did you write any new necessary tests?
- Sure did!
## Who can review?
cc @muellerzr @BenjaminBossan | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2450/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2450/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2449 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2449/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2449/comments | https://api.github.com/repos/huggingface/accelerate/issues/2449/events | https://github.com/huggingface/accelerate/pull/2449 | 2,135,793,673 | PR_kwDOEmVyfs5m7-FC | 2,449 | Remove unnecessary `env=os.environ.copy()`s | {
"login": "akx",
"id": 58669,
"node_id": "MDQ6VXNlcjU4NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akx",
"html_url": "https://github.com/akx",
"followers_url": "https://api.github.com/users/akx/followers",
"following_url": "https://api.github.com/users/akx/following{/other_user}",
"gists_url": "https://api.github.com/users/akx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akx/subscriptions",
"organizations_url": "https://api.github.com/users/akx/orgs",
"repos_url": "https://api.github.com/users/akx/repos",
"events_url": "https://api.github.com/users/akx/events{/privacy}",
"received_events_url": "https://api.github.com/users/akx/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-02-15T07:05:01 | 2024-02-15T07:05:01 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2449",
"html_url": "https://github.com/huggingface/accelerate/pull/2449",
"diff_url": "https://github.com/huggingface/accelerate/pull/2449.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2449.patch",
"merged_at": null
} | # What does this PR do?
Subprocess execution calls had a copy-pasted `env=os.environ.copy()` tacked onto them.
When `env=None`, quoth the docs for `subprocess.run`:
> If env is not `None`, it must be a mapping that defines the environment variables for the new process; these are used **instead of the default behavior of inheriting the current process’ environment.**
, emphasis mine.
Unless someone expects `execute_subprocess_async()` to modify the `env` passed in, these are all unnecessary copy-paste. If `execute_subprocess_async()` is expected to modify it, then we can add `if env is None: env = os.environ.copy()` within it.
Sibling of #2446.
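A small illustration of the quoted behaviour (hedged; the extra environment variable name is a made-up placeholder):

```python
import os
import subprocess

subprocess.run(["accelerate", "env"])                  # env=None (default): the child inherits os.environ as-is

child_env = os.environ.copy()                           # a copy is only needed when the child should
child_env["MY_EXTRA_FLAG"] = "1"                        # see something the parent does not set
subprocess.run(["accelerate", "env"], env=child_env)
```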
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- No user-facing changes.
- [ ] Did you write any new necessary tests?
- Shouldn't be necessary.
## Who can review?
cc @muellerzr @BenjaminBossan | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2449/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2448 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2448/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2448/comments | https://api.github.com/repos/huggingface/accelerate/issues/2448/events | https://github.com/huggingface/accelerate/pull/2448 | 2,135,370,775 | PR_kwDOEmVyfs5m6jQs | 2,448 | Update accelerator.py | {
"login": "tginart",
"id": 11379648,
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tginart",
"html_url": "https://github.com/tginart",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https://api.github.com/users/tginart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tginart/subscriptions",
"organizations_url": "https://api.github.com/users/tginart/orgs",
"repos_url": "https://api.github.com/users/tginart/repos",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"received_events_url": "https://api.github.com/users/tginart/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for working on this. Do you have a code snippet that reproduces the error? It would be useful to see and possibly add it as a test as well.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/accelerate/pr_2448). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-02-14T23:31:17 | 2024-02-15T14:37:25 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/accelerate/pulls/2448",
"html_url": "https://github.com/huggingface/accelerate/pull/2448",
"diff_url": "https://github.com/huggingface/accelerate/pull/2448.diff",
"patch_url": "https://github.com/huggingface/accelerate/pull/2448.patch",
"merged_at": null
} | TL;DR -- Fix error behavior for models initialized with 'cuda' device map
# What does this PR do?
If a model is initialized with the 'cuda' device map, the prepare_model routine will silently fail due to current_device.index returning a None value. Instead, it seems the correct behavior is to catch this in the if statement above and raise a clear error message.
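A hedged reproduction sketch along the lines the reviewer asked for; the model name and exact flow are our assumptions, not taken from the PR.

```python
import torch
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

print(torch.device("cuda").index)                # None: the value the old check tripped over

model = AutoModelForCausalLM.from_pretrained("gpt2", device_map="cuda")
model = Accelerator().prepare(model)             # previously slipped past the guard instead of raising clearly
```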
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/accelerate/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/accelerate/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/accelerate/tree/main/docs#writing-documentation---specification).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
- Big modeling: @SunMarc
- Fully-Sharded Data Parallism: @pacman100
- DeepSpeed: @pacman100
- Command Line Interface: @muellerzr
- Documentation: @muellerzr
- Core parts of the library: @muellerzr @BenjaminBossan
- Maintained examples: @muellerzr or @pacman100
--> | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2448/timeline | null | null | true |
https://api.github.com/repos/huggingface/accelerate/issues/2447 | https://api.github.com/repos/huggingface/accelerate | https://api.github.com/repos/huggingface/accelerate/issues/2447/labels{/name} | https://api.github.com/repos/huggingface/accelerate/issues/2447/comments | https://api.github.com/repos/huggingface/accelerate/issues/2447/events | https://github.com/huggingface/accelerate/issues/2447 | 2,134,853,642 | I_kwDOEmVyfs5_P0gK | 2,447 | Does the recent news about speeding up pytorch model loading apply to huggingface transformers APIs? | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"That won't work. You should use exactly his code, or use `low_cpu_mem_usage=True` (and no context manager needed)"
] | 2024-02-14T17:42:36 | 2024-02-14T17:48:19 | null | NONE | null | null | null | https://twitter.com/RisingSayak/status/1756634311493890559
Can this apply to using transformers APIs? Take the below for example:
```python
import torch
from transformers import DistilBertForSequenceClassification

model = DistilBertForSequenceClassification.from_pretrained(model_path, local_files_only=True).to(
    torch.device("cpu")
)
```
Can this be modified to load faster? | {
"url": "https://api.github.com/repos/huggingface/accelerate/issues/2447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/accelerate/issues/2447/timeline | null | null | false |
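Following up on the loading question in the final issue above, a hedged sketch of the maintainer's `low_cpu_mem_usage=True` suggestion; `model_path` is assumed to be the same local checkpoint as in the original snippet.

```python
from transformers import DistilBertForSequenceClassification

model = DistilBertForSequenceClassification.from_pretrained(
    model_path,              # same local checkpoint as in the snippet above
    local_files_only=True,
    low_cpu_mem_usage=True,  # skip the full random init and load weights directly
)
```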