Columns: Serial Number (int64, 1–6k), Issue Number (int64, 75.6k–112k), Title (string, 3–357 chars), Labels (string, 3–241 chars), Body (string, 9–74.5k chars), Comments (int64, 0–867)
5,901
76,288
Multi-GPU distributed training reports errors
oncall: distributed, triaged, module: c10d
### 🐛 Describe the bug ``` 2022-04-24 09:26:36,241 - INFO - task : ['pedestrian', 'traffic_cone'], loss: 1.5811, hm_loss: 0.9833, loc_loss: 2.3910, loc_loss_elem: ['0.1535', '0.1580', '0.2172', '0.2156', '0.2590', '0.1522', '0.2787', '0.3207', '0.5800', '0.5356'], num_positive: 144.2000 2022-04-24 09:27:23,734 - INFO - finding looplift candidates 2022-04-24 09:27:23,734 - INFO - finding looplift candidates 2022-04-24 09:27:23,734 - INFO - finding looplift candidates 2022-04-24 09:27:23,734 - INFO - finding looplift candidates 2022-04-24 09:27:23,734 - INFO - finding looplift candidates 2022-04-24 09:27:23,734 - INFO - finding looplift candidates 2022-04-24 09:27:23,734 - INFO - finding looplift candidates 2022-04-24 09:27:23,734 - INFO - finding looplift candidates [E ProcessGroupNCCL.cpp:719] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=583940, OpType=_ALLGATHER_BASE, Timeout(ms)=1800000) ran for 1800257 milliseconds before timing out. [E ProcessGroupNCCL.cpp:406] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down. terminate called after throwing an instance of 'std::runtime_error' what(): [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=583940, OpType=_ALLGATHER_BASE, Timeout(ms)=1800000) ran for 1800257 milliseconds before timing out. /public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects `--local_rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. 
See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions FutureWarning, WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 58033 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 58028) of binary: /public/home/u212040344/.conda/envs/centerpoint/bin/python Traceback (most recent call last): File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module> main() File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main launch(args) File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch run(args) File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/site-packages/torch/distributed/run.py", line 718, in run )(*cmd_args) File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/public/home/u212040344/.conda/envs/centerpoint/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 247, in launch_agent failures=result.failures, torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ====================================================== ./tools/train.py FAILED ------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2022-04-24_09:59:14 host : node191 rank : 0 (local_rank: 0) exitcode : -6 (pid: 58028) error_file: <N/A> traceback : Signal 6 (SIGABRT) received by PID 58028 ====================================================== ``` ### Versions When training on two A100 GPUs, the job got stuck at the end of the first epoch, right after the INFO lines above; at that point one GPU's utilization stayed at 100% while the other stayed at 0%. Eventually the error above was printed. How do I fix it? Thanks! *environment: PyTorch 1.11, CUDA 11.3* cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
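A hedged stopgap while debugging, assuming the job is launched with `torchrun` or `torch.distributed.launch` (the 100%/0% utilization pattern suggests the ranks diverged and one of them never reached the `_ALLGATHER_BASE` call): the 30-minute watchdog timeout can be raised when creating the process group so the job survives long enough to inspect what the stalled rank is doing. This does not fix the underlying desynchronization.

```python
import datetime
import torch.distributed as dist

# Sketch only: with torchrun / torch.distributed.launch the env:// defaults
# supply rank and world size, so only the collective timeout is overridden here
# (the default is 30 minutes, which is what the watchdog reports above).
dist.init_process_group(backend="nccl", timeout=datetime.timedelta(hours=2))
```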
3
5,902
76,287
torch.elastic fails to shutdown despite crashed processes
oncall: distributed, module: elastic
### 🐛 Describe the bug One of the nodes in the DDP training crashed. torch.elastic detected this and killed most of the workers, but the workers on the failing node kept hanging until they were manually killed. It should have killed all workers so that SLURM could have exited the job, which would then have restarted anew and recovered from the problem; instead it just hung, logging the same warning in an endless loop for 8 hours, until I noticed the problem and manually killed the SLURM job: ``` WARNING:torch.distributed.elastic.rendezvous.dynamic_rendezvous:The node 'jean-zay-iam37-ib0_187281_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError. WARNING:torch.distributed.elastic.rendezvous.dynamic_rendezvous:The node 'jean-zay-iam37-ib0_187281_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError. WARNING:torch.distributed.elastic.rendezvous.dynamic_rendezvous:The node 'jean-zay-iam37-ib0_187281_0' has failed to send a keep-alive heartbeat to the rendezvous 'none' due to an error of type RendezvousConnectionError. ``` Here is the full log file: [log.4.txt](https://github.com/pytorch/pytorch/files/8549986/log.4.txt) As can be seen from the end of the log file, these are the 6 processes that didn't get killed until I manually signalled SLURM to kill the job: ``` srun: Job step aborted: Waiting up to 62 seconds for job step to finish. WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 187365 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 187366 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 187367 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 187369 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 187370 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 187371 closing signal SIGTERM ``` The launcher part was: ``` export LAUNCHER="python -u -m torch.distributed.run \ --nproc_per_node $GPUS_PER_NODE \ --nnodes $NNODES \ --rdzv_endpoint $MASTER_ADDR:$MASTER_PORT \ --rdzv_backend c10d \ --max_restarts 0 \ --tee 3 \ " ``` Thank you.
@cbalioglu ### Versions PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Red Hat Enterprise Linux release 8.4 (Ootpa) (x86_64) GCC version: (GCC) 8.4.1 20200928 (Red Hat 8.4.1-1) Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.28 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-4.18.0-305.40.2.el8_4.x86_64-x86_64-with-glibc2.17 Is CUDA available: False CUDA runtime version: 11.4.152 GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.22.2 [pip3] torch==1.11.0+cu115 [pip3] torchaudio==0.11.0+cu115 [pip3] torchvision==0.12.0+cu115 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.22.2 pypi_0 pypi [conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch-nightly [conda] torch 1.11.0+cu115 pypi_0 pypi [conda] torchaudio 0.11.0+cu115 pypi_0 pypi [conda] torchvision 0.12.0+cu115 pypi_0 pypi cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
31
5,903
76,282
`torch.cuda.amp.GradScaler` may skip parameter synchronization required by post localSGD optimizer
oncall: distributed, module: amp (automated mixed precision)
### 🐛 Describe the bug The following two designs may not work very well together. - When mixed precision is enabled by `torch.cuda.amp.GradScaler`, as noted in this [line](https://github.com/pytorch/pytorch/blob/bbc3cc6718d841665ab2677477d458ed97b75609/torch/cuda/amp/grad_scaler.py#L68) and this [line](https://github.com/pytorch/pytorch/blob/bbc3cc6718d841665ab2677477d458ed97b75609/torch/cuda/amp/grad_scaler.py#L86), if gradients contain infs/NaNs, then `optimizer.step()` can be skipped. - Some [distributed optimizers](https://github.com/pytorch/pytorch/tree/77f23d64607142c0cba47f5148ebc1959c9de366/torch/distributed/optim) often run certain communications besides `step()` of the local optimizer. A prominent example is [post_localSGD_optimizer](https://github.com/pytorch/pytorch/blob/77f23d64607142c0cba47f5148ebc1959c9de366/torch/distributed/optim/post_localSGD_optimizer.py#L78), which runs parameter averaging after `step()` of local optimizer. Since parameter/model averaging paradigm (e.g., local SGD, hierarchical SGD, SlowMo) does not synchronize parameters at every iteration, even if gradients have infs/NaNs and the local trainable parameters are not updated at the current iteration, it is still better to average the parameters than have such averaging skipped. Otherwise the parameters might become too stale (e.g., the last synchronization could be `2*T` iterations ago, where `T` denotes the averaging period). IMO, this might even cause the loss of NaN might occur for some cases. Furthermore, this behavior of silently skipping `optimizer.step()` can also be a risk for some other (future) distributed optimizers if certain states (e.g., optimizer state, buffer) still need to be updated. For example, the internal step counter [here](https://github.com/pytorch/pytorch/blob/77f23d64607142c0cba47f5148ebc1959c9de366/torch/distributed/optim/functional_rmsprop.py#L91) can be less than the # of steps in a training loop, and same as the step counter in this [RFC](https://github.com/pytorch/pytorch/issues/74556). Additionally, not sure if any other optimizer wrapper may need to run certain communications even if no local optimizer step. One relevant feature request is https://github.com/pytorch/pytorch/issues/67590, which proposes to signal that the scaler has been stepped and `optimizer.step()` has been actually called. _Note that this issue may not be very serious for some cases in practice, because [such step skipping is expected to occur rarely after the first few iterations](https://github.com/pytorch/pytorch/blob/bbc3cc6718d841665ab2677477d458ed97b75609/torch/cuda/amp/grad_scaler.py#L93-L95)._ ### Versions PyTorch 1.11 cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @mcarilli @ptrblck
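A minimal user-side sketch of how the skip can be detected and compensated for today, assuming nothing about a future API: `GradScaler` lowers its scale whenever it skips `optimizer.step()`, so comparing the scale around `update()` reveals the skip, and the averaging can then be triggered manually. The `averager` object below is assumed to be one of the model averagers from `torch.distributed.algorithms.model_averaging`; the exact call may differ.

```python
import torch

# Sketch only: run the post-localSGD averaging even when GradScaler skipped the
# local optimizer step, so replicas cannot drift for more than one period.
def train_step(model, optimizer, scaler, averager, inputs, targets, loss_fn):
    optimizer.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()
    prev_scale = scaler.get_scale()
    scaler.step(optimizer)                 # silently skipped on inf/NaN gradients
    scaler.update()
    if scaler.get_scale() < prev_scale:    # scale was reduced => step was skipped
        averager.average_parameters(model.parameters())
    return loss.detach()
```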
0
5,904
76,274
[ONNX] Converting the custom operator PreciseRoIPooling to ONNX
module: onnx, triaged, onnx-needs-info
## Issue description I want to speed up the model in the project, but the model contains the prroi pooling library compiled by the third party. Through consulting the documents, I learned that custom operators can be added for conversion, but onnx is still cann't export after simple implementation. How can I convert prroi pooling to onnx?(PreciseRoIPooling with dynamically linked C compiler library) The following is the implementation details of the custom operator for reference. ## Code example ``` import torch import torch.autograd as ag __all__ = ['prroi_pool2d'] _prroi_pooling = None def _import_prroi_pooling(): global _prroi_pooling if _prroi_pooling is None: try: from os.path import join as pjoin, dirname from torch.utils.cpp_extension import load as load_extension root_dir = pjoin(dirname(__file__), 'src') _prroi_pooling = load_extension( '_prroi_pooling', [pjoin(root_dir, 'prroi_pooling_gpu.c'), pjoin(root_dir, 'prroi_pooling_gpu_impl.cu')], verbose=True ) except ImportError: raise ImportError('Can not compile Precise RoI Pooling library.') return _prroi_pooling class PrRoIPool2DFunction(ag.Function): @staticmethod def forward(ctx, features, rois, pooled_height, pooled_width, spatial_scale): _prroi_pooling = _import_prroi_pooling() assert 'FloatTensor' in features.type() and 'FloatTensor' in rois.type(), \ 'Precise RoI Pooling only takes float input, got {} for features and {} for rois.'.format(features.type(), rois.type()) pooled_height = int(pooled_height) pooled_width = int(pooled_width) spatial_scale = float(spatial_scale) features = features.contiguous() rois = rois.contiguous() params = (pooled_height, pooled_width, spatial_scale) if features.is_cuda: output = _prroi_pooling.prroi_pooling_forward_cuda(features, rois, *params) ctx.params = params # everything here is contiguous. ctx.save_for_backward(features, rois, output) else: raise NotImplementedError('Precise RoI Pooling only supports GPU (cuda) implememtations.') return output @staticmethod def backward(ctx, grad_output): _prroi_pooling = _import_prroi_pooling() features, rois, output = ctx.saved_tensors grad_input = grad_coor = None if features.requires_grad: grad_output = grad_output.contiguous() grad_input = _prroi_pooling.prroi_pooling_backward_cuda(features, rois, output, grad_output, *ctx.params) if rois.requires_grad: grad_output = grad_output.contiguous() grad_coor = _prroi_pooling.prroi_pooling_coor_backward_cuda(features, rois, output, grad_output, *ctx.params) return grad_input, grad_coor, None, None, None @staticmethod def symbolic(g, features, rois, pooled_height, pooled_width, spatial_scale): return g.op("::PrRoIPool2DFunction", features, rois, pooled_height, pooled_width, spatial_scale) prroi_pool2d = PrRoIPool2DFunction.apply ``` Then I encountered this problem again. What caused this? 
![image](https://user-images.githubusercontent.com/45596935/164953373-8e69d84f-088d-47bb-a4d3-c87b1cf4672a.png) ## System Info PyTorch version: 1.10.1 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 Libc version: glibc-2.17 Python version: 3.6.13 |Anaconda, Inc.| (default, Jun 4 2021, 14:25:59) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.4.0-109-generic-x86_64-with-debian-bullseye-sid Is CUDA available: True CUDA runtime version: 11.2.152 - Versions of any other relevant libraries: [pip3] numpy==1.19.2 [pip3] torch==1.10.1 [pip3] torchaudio==0.10.1 [pip3] torchfile==0.1.0 [pip3] torchvision==0.11.2 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2020.2 256 [conda] mkl-service 2.3.0 py36he8ac12f_0 [conda] mkl_fft 1.3.0 py36h54f3939_0 [conda] mkl_random 1.1.1 py36h0573a6f_0 [conda] numpy 1.19.2 py36h54aff64_0 [conda] numpy-base 1.19.2 py36hfa32c7d_0 [conda] pytorch 1.10.1 py3.6_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 0.10.1 py36_cu113 pytorch [conda] torchfile 0.1.0 pypi_0 pypi [conda] torchvision 0.11.2 py36_cu113 pytorch
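If the failure comes from the `symbolic` method above (the error screenshot is not visible here, so this is a guess), one common cause is passing the Python scalars `pooled_height`, `pooled_width`, and `spatial_scale` to `g.op` as graph inputs. A hedged drop-in replacement for that staticmethod, passing them as typed ONNX attributes and using a named custom domain (the domain name `prroi` is a placeholder):

```python
    @staticmethod
    def symbolic(g, features, rois, pooled_height, pooled_width, spatial_scale):
        # Scalars go in as attributes (suffix _i for int, _f for float) rather
        # than as inputs; the runtime consuming the model must provide an
        # implementation for the "prroi::PrRoIPool2D" custom op.
        return g.op(
            "prroi::PrRoIPool2D",
            features,
            rois,
            pooled_height_i=int(pooled_height),
            pooled_width_i=int(pooled_width),
            spatial_scale_f=float(spatial_scale),
        )
```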
7
5,905
76,244
Initial integration of ZenDNN as backend into PyTorch
feature, module: convolution, triaged
### 🚀 The feature, motivation and pitch ZenDNN library optimizes basic math kernels used by Deep learning algorithms on AMD EPYC CPUs. As part of integration of ZenDNN into PyTorch we integrated a single op (convolution). We have support for other ops and optimization ready to be submitted. This PR covers following - Added ZenDNN as third-party library - ZenDNN is a submodule for pytorch - modified pytorch build infra to build with ZenDNN when USE_ZENDNN is set to 1 - pytorch build reverts to default behavior if USE_ZENDNN environmental variable is set to zero or unset - Added AOCL-blis as dependency of ZenDNN - AOCL-blis is a submodule for pytorch - AOCL-blis is built if USE_ZENDNN is set to 1 - Added ZenDNN as backend to pytorch - added support for user level control to disable or enable zendnn backend - Added following changes - added zendnn layout and ZendnnCPU dispatch key. - tensor conversion between dense and zendnn layouts - zendnn convolution as a new ConvBackend - when built with zendnn (USE_ZENDNN=1) and gradient calculations are disabled (torch.no_grad) convolution is computed using ZenDNN library) - backward propagation is not supported. - support is only added for fp32 data type - Added few unit tests for pytorch + zendnn URLs of dependent Library: ZenDNN github: https://github.com/amd/ZenDNN AMD-BLIS github: https://github.com/amd/blis Corresponding Pull Request: https://github.com/pytorch/pytorch/pull/76242 ### Alternatives _No response_ ### Additional context _No response_
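A purely hypothetical usage sketch of the user-level switch mentioned above, mirroring how `torch.backends.mkldnn` is exposed; the actual attribute names are defined by the PR (built with `USE_ZENDNN=1`) and may well differ:

```python
import torch

# Hypothetical API names; the getattr guard makes this a no-op on builds
# without ZenDNN support.
zendnn = getattr(torch.backends, "zendnn", None)
if zendnn is not None and zendnn.is_available():
    zendnn.enabled = True                      # user-level enable/disable switch
    conv = torch.nn.Conv2d(3, 8, kernel_size=3)
    with torch.no_grad():                      # only fp32 inference is covered initially
        out = conv(torch.randn(1, 3, 32, 32))
```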
8
5,906
76,232
Observing a strange behavior - Row parallelism
module: numerical-stability, module: nn, triaged, module: correctness (silent)
### 🐛 Describe the bug Hi there! I am trying to manually reproduce this scheme, which corresponds to a model row parallelism paradigm: ![Screenshot from 2022-04-22 15-55-48](https://user-images.githubusercontent.com/49240599/164728838-3f78585b-1018-4366-a499-95133fdeaa89.png) Therefore I have written a small snippet of code to reproduce the scheme above, the assertion passes for small ```batch_size```, ```hidden_dim``` and ```output_dim``` values (for example: ```batch_size = 1 \ hidden_dim = 4 \ output_dim = 2```) but strangely the assertion does not pass anymore for large values (specifically 10, 40, 20). What could explain this behavior? Code snippet (tried on google colab): ``` import torch import torch.nn as nn import torch.nn.functional as F batch_size = 10 hidden_dim = 40 output_dim = 20 sliced_dim = hidden_dim//2 dummy_mlp = nn.Linear(hidden_dim, output_dim, bias=False) parameters = torch.nn.Parameter(torch.randn(output_dim, hidden_dim)) dummy_mlp.weight = parameters dummy_input = torch.randn(batch_size, hidden_dim) sliced_input_1, sliced_input_2 = torch.split(dummy_input, sliced_dim, dim=-1) sliced_output_1 = F.linear(sliced_input_1, dummy_mlp.weight[:, :sliced_dim]) sliced_output_2 = F.linear(sliced_input_2, dummy_mlp.weight[:, sliced_dim:]) final_output = dummy_mlp(dummy_input) assert torch.equal(final_output, sliced_output_1 + sliced_output_2) ``` Thank you very much in advance for your help!! ### Versions Collecting environment information... PyTorch version: 1.10.0+cu111 Is debug build: False CUDA used to build PyTorch: 11.1 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final) CMake version: version 3.12.0 Libc version: glibc-2.26 Python version: 3.7.13 (default, Mar 16 2022, 17:37:17) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic Is CUDA available: True CUDA runtime version: 11.1.105 GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 460.32.03 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 /usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.5 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.5 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.5 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.5 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.5 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.5 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.5 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.6 [pip3] torch==1.10.0+cu111 [pip3] torchaudio==0.10.0+cu111 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.1+cu111 [conda] Could not collect cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
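For what it's worth, the mismatch is consistent with floating-point non-associativity: the sliced path accumulates the dot products in a different order than the fused matmul, so the fp32 results differ in the last bits once the reduction gets long enough. The usual check is an approximate comparison rather than bitwise equality; a minimal sketch:

```python
import torch

# The split matmul sums the 40 products per output element in a different
# order than the fused one, so exact equality is too strict in fp32.
torch.manual_seed(0)
x = torch.randn(10, 40)
w = torch.randn(20, 40)
full = x @ w.t()
split = x[:, :20] @ w[:, :20].t() + x[:, 20:] @ w[:, 20:].t()
print(torch.equal(full, split))                            # often False
print(torch.allclose(full, split, rtol=1e-5, atol=1e-6))   # typically True
```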
5
5,907
76,225
RuntimeError: bucket_count == per_bucket_sizes.size()INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646755853042/work/torch/csrc/distributed/c10d/reducer.cpp":980, please report a bug to PyTorch.
oncall: distributed, triaged
### 🐛 Describe the bug ```bash bucket_count == per_bucket_sizes.size()INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646755853042/work/torch/csrc/distributed/c10d/reducer.cpp":980, please report a bug to PyTorch. ``` ### Versions ```bash python collect_env.py Collecting environment information... PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 11.2.152 GPU models and configuration: GPU 0: Tesla V100-PCIE-32GB GPU 1: Tesla V100-PCIE-32GB GPU 2: Tesla V100-PCIE-32GB GPU 3: Tesla V100-PCIE-32GB Nvidia driver version: 510.60.02 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.1.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.1.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.11.0 [pip3] torchaudio==0.11.0 [pip3] torchvision==0.12.0 [conda] blas 1.0 mkl defaults [conda] cudatoolkit 10.2.89 hfd86e86_1 defaults [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.3.0 h06a4308_520 defaults [conda] mkl-service 2.4.0 py38h7f8727e_0 defaults [conda] mkl_fft 1.3.1 py38hd3c417c_0 defaults [conda] mkl_random 1.2.2 py38h51133e4_0 defaults [conda] numpy 1.21.2 py38h20f2e39_0 defaults [conda] numpy-base 1.21.2 py38h79a1101_0 defaults [conda] pytorch 1.11.0 py3.8_cuda10.2_cudnn7.6.5_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 0.11.0 py38_cu102 pytorch [conda] torchvision 0.12.0 py38_cu102 pytorch ``` cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
2
5,908
76,206
Update NCCL to 2.12
oncall: distributed, module: build, triaged
### 🐛 Describe the bug The current PyTorch links NCCL v2.10.3-1 by default, which includes a critical bug that breaks the `CollNet` algorithm in distributed training: https://github.com/NVIDIA/nccl/issues/556. This bug has been fixed since v2.12.7-1: https://github.com/NVIDIA/nccl/commit/4ec992fab7d90691ad43bee6f7f6c34aa219934c. Is there any timeline to update the NCCL version that PyTorch links to? If not I'd be happy to submit a PR. ### Versions N/A cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @malfet @seemethere
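For reference, a quick sanity check of which NCCL a given build links against (a minimal sketch; the return format varies a little between PyTorch releases):

```python
import torch

# Prints the linked NCCL version, e.g. (2, 10, 3) on recent builds; anything
# older than 2.12.7 still carries the CollNet bug referenced above.
print(torch.cuda.nccl.version())
```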
4
5,909
76,191
[Feature request] Exclusive prefix sum, `torch.cumsum(input, dim=0, exclusive=True)`
high priority, module: cuda, triaged, enhancement, module: reductions
### 🚀 The feature, motivation and pitch

Currently `torch.cumsum` only supports an inclusive prefix sum. However, the exclusive prefix sum is also needed in some cases. I propose adding an `exclusive` parameter to the `torch.cumsum` function:

```py
input = torch.tensor([1, 1, 1])
torch.cumsum(input, dim=0, exclusive=True)   # [0, 1, 2]
torch.cumsum(input, dim=0, exclusive=False)  # [1, 2, 3]
```

Related issue from PyTorch Forums:
- [PyTorch equivalent of exclusive cumsum? - PyTorch Forums](https://discuss.pytorch.org/t/pytorch-equivalent-of-exclusive-cumsum/107259)

CUDA implementation from CUB:
- [CUB: cub::DeviceScan Struct Reference](https://nvlabs.github.io/cub/structcub_1_1_device_scan.html#a02b2d2e98f89f80813460f6a6ea1692b)

### Alternatives
_No response_

### Additional context
_No response_

cc @ezyang @gchanan @zou3519 @ptrblck @ngimel
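Until something like this lands, a workaround sketch that derives the exclusive scan from the inclusive one (two passes over the data, so not a replacement for a fused kernel like CUB's `DeviceScan`):

```python
import torch

def exclusive_cumsum(x: torch.Tensor, dim: int = 0) -> torch.Tensor:
    # Exclusive scan = inclusive scan minus the element itself.
    return torch.cumsum(x, dim=dim) - x

print(exclusive_cumsum(torch.tensor([1, 1, 1])))  # tensor([0, 1, 2])
```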
5
5,910
76,186
fx: cannot find module <built-in method matmul> when using apex.amp
triaged, module: fx
### 🐛 Describe the bug I ran into the following issue when using `torch.fx` and [apex.amp](https://nvidia.github.io/apex/amp.html) together. If a model has `torch.matmul` operations and uses `apex.amp (opt_level=O1)` for mixed precision optimization, then fx will fail to recompile the model graph and will report the following error: ```text File "/xxxxxxxxxxxxx/dist-packages/torch/fx/node.py", line 40, in _find_module_of_method raise RuntimeError(f'cannot find module for {orig_method}, {name}') RuntimeError: cannot find module for <built-in method matmul of type object at 0x7f6c5fd9fe40> ``` A simple toy module with `torch.matmul` can reproduce the error. ```python import torch from torch import fx class ToyMod(torch.nn.Module): def __init__(self, in_features=768, out_features=768): super().__init__() self.linear = torch.nn.Linear(in_features=in_features, out_features=out_features) def forward(self, X, other): result = torch.matmul(self.linear(input=X), other) return result def test(): from apex import amp mod = ToyMod().cuda() optimizer = torch.optim.SGD(mod.parameters(), lr=1e-3) # Allow Amp to perform casts as required by the opt_level model, optimizer = amp.initialize(mod, optimizer, opt_level="O1") graph : fx.Graph = fx.Tracer().trace(model) for node in graph.nodes: # do something pass graph.lint() return fx.GraphModule(model, graph) ``` By checking [fx/node.py](https://github.com/pytorch/pytorch/blob/v1.10.0/torch/fx/node.py), it seems `torch.matmul` is missing in `torch`. I added `print(torch.matmul)` at https://github.com/pytorch/pytorch/blob/v1.10.0/torch/fx/node.py#L37, and got ```text torch.matmul: <function _VariableFunctionsClass.matmul at 0x7f377820cb70> ``` The object address of matmul in `torch` has been changed. Any suggestions about this issue are appreciated. ### Versions PyTorch version: 1.10.0 CUDA used to build PyTorch: 11.3 OS: Debian GNU/Linux 10 (buster) (x86_64) GCC version: (Debian 8.3.0-6) 8.3.0 Python version: 3.7.3 Is CUDA available: True CUDA runtime version: 11.3.109 [pip3] numpy==1.21.5 [pip3] torch==1.10.0 [pip3] torchaudio==0.10.0+cu113 [pip3] torchvision==0.11.1+cu113 apex: Version 0.1 cc @ezyang @SherlockNoMad
1
5,911
76,185
Ensure custom Function are correct in double backward setting
module: autograd, triaged, enhancement, actionable
### Problem statement Today, if intermediary Tensors are saved during the forward and used during the backward, the double backward will be silently wrong because their contribution will be ignored. A small repro is: ```python import torch from torch.autograd import Function, gradcheck, gradgradcheck class MyFn(Function): @staticmethod def forward(ctx, inp): temp = inp * 2 out = temp * 3 ctx.temp = temp ctx.inp = inp return out @staticmethod def backward(ctx, gO): return gO * torch.full_like(gO, 3) * ctx.temp / ctx.inp inp = torch.rand(10, dtype=torch.float64, requires_grad=True) gradcheck(MyFn.apply, inp) gradgradcheck(MyFn.apply, inp) ``` ### Proposed solution Note that I am not sure this is the best way to go about this, we might want to do something else in practice? Option 1) Add a `ErrorNode` as the grad_fn for everything that gets saved on the ctx and via save_for_backward if it does not already have a grad_fn. Since the backward runs in no grad_mode when you're not doing double backward, it will just be ignored for most users. But in the double backward case, it will be attached to the created graph. And so if the user tries to backward through that particular Tensor, we can provide a nice error message. If this is a false positive for some users (they want to ignore that intermediary from the double backward computation), then they can just "detach()" it to recover the previous behavior. Option 2) Add these error nodes on the fly during the backward function only if grad_mode is enabled. This avoid mutating the object when the user does `ctx.foo = foo` which is better? And only the accessed one is changed? Unfortunately, we cannot do this out of place as some users rely on the fact that `ctx.foo is foo`. Option 3) ?? cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
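For contrast, a sketch of how the same repro can be written correctly under double backward today: save the input with `save_for_backward` and recompute the intermediate inside `backward`, so its contribution is recorded when the backward runs with `create_graph=True`. (This is a user-side workaround, not the proposed fix.)

```python
import torch
from torch.autograd import Function, gradcheck, gradgradcheck

class MyFnSafe(Function):
    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)      # save the input, not the intermediate
        return inp * 2 * 3

    @staticmethod
    def backward(ctx, gO):
        inp, = ctx.saved_tensors
        temp = inp * 2                  # recomputed here, so double backward
        return gO * 3 * temp / inp      # differentiates through it correctly

inp = torch.rand(10, dtype=torch.float64, requires_grad=True)
assert gradcheck(MyFnSafe.apply, inp)
assert gradgradcheck(MyFnSafe.apply, inp)
```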
2
5,912
76,181
Allow `torch.fx` tracing on TorchScript models
oncall: jit, module: fx
### 🚀 The feature, motivation and pitch It would be really nice if we could use `torch.fx.symbolic_trace` on a TorchScript model. We use TorchScript extensively to save models since it's a hermetically sealed artifact that no longer requires a Python dependency. We'd like to be able to then `torch.fx.symbolic_trace` these TorchScript models so we can use the Graph FX quantization API on pre-trained models -- right now, if you try to trace a TorchScript model you get an error like: ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [47], in <module> 1 original = nn.Linear(1, 1) 2 scripted = torch.jit.script(original) ----> 3 torch.fx.symbolic_trace(scripted) File /opt/conda/envs/ctrldev/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py:907, in symbolic_trace(root, concrete_args, enable_cpatching) 858 """ 859 Symbolic tracing API 860 (...) 904 GraphModule: a Module created from the recorded operations from ``root``. 905 """ 906 tracer = Tracer(enable_cpatching=enable_cpatching) --> 907 graph = tracer.trace(root, concrete_args) 908 name = root.__class__.__name__ if isinstance(root, torch.nn.Module) else root.__name__ 909 return GraphModule(tracer.root, graph, name) File /opt/conda/envs/ctrldev/lib/python3.8/site-packages/torch/fx/_symbolic_trace.py:559, in Tracer.trace(self, root, concrete_args) 557 if isinstance(root, torch.nn.Module): 558 self.root = root --> 559 fn = type(root).forward 560 self.submodule_paths = {mod: name for name, mod in root.named_modules()} 561 else: File /opt/conda/envs/ctrldev/lib/python3.8/site-packages/torch/jit/_script.py:298, in _CachedForward.__get__(self, obj, cls) 297 def __get__(self, obj, cls): --> 298 return self.__getattr__("forward") AttributeError: '_CachedForward' object has no attribute '__getattr__' ``` This would be super convenient because it would decouple the code that produced the original pre-trained model and the code used for quantization. ### Alternatives Ultimately, what we want is to be able to quantize a model without having access to the original code that generated the model (so no `pickle`, for instance). AFAICT, TorchScript is the best way to serialize a model without needing the original Python dependency, but if there's an alternative serialization format that plays nice with `torch.fx` then I think we'd be happy to use that instead (e.g. is there a way to serialize a `torch.fx.Graph` in such a hermetically sealed way?). ### Additional context _No response_
6
5,913
76,176
Indexing assignment can have no effect on CUDA with deterministic algorithms
high priority, triaged, module: advanced indexing
### 🐛 Describe the bug On CUDA with `use_deterministic_algorithms(True)`, advanced indexing assignment has no effect on target tensors with more than one effective dimension when the source tensor has one dimension. ## To reproduce ```python import torch torch.use_deterministic_algorithms(True) device = 'cuda' # Snippet shows the case for shape (4, 2) # Can also be (n, 1, 2), (n, 1, 1, 3), etc. # While (n, 1), (n, 1, 1), etc. work as intended n = 4 # First dim size d = 2 # Number of dims m = 2 # Last dim size a = torch.zeros((n, *(1, )*(d-2), m), device=device) # Target b = torch.ones(n, device=device) # Source print('Full shape: ', a.shape) # Assign to all rows at subsequent 0-indices (e.g. first column) idx_tuple = (torch.arange(n), *(torch.tensor(0), )*(d-1)) print('Assigning to indices:', idx_tuple) a[idx_tuple] = b print('Sum of first column: ', a[idx_tuple].sum().item()) print('Sum of source data: ', b.sum().item()) ``` ## Output ``` Full shape: torch.Size([4, 2]) Assigning to indices: (tensor([0, 1, 2, 3]), tensor(0)) Sum of first column: 0.0 Sum of source data: 4.0 ``` ## Expected behaviour Output should be the same as on CUDA with `use_deterministic_algorithms(False)` and CPU: ``` Full shape: torch.Size([4, 2]) Assigning to indices: (tensor([0, 1, 2, 3]), tensor(0)) Sum of first column: 4.0 Sum of source data: 4.0 ``` ## Additional context When `a[idx_tuple] = b` is replaced with an explicit call to `a.index_put_(idx_tuple, b)`, the bug does not present itself, despite, I would assume, deferring to the same underlying calls. For completeness, the bug also does not happen with simpler indexing, e.g. when `a[idx_tuple] = b` is replaced with: ```python for j, i in enumerate(idx_tuple[0]): a[(i, ) + idx_tuple[1:]] = b[j] ``` Interestingly, if the source tensor has more than one effective dimension as well, the bug also does not occur. To confirm: ```python torch.use_deterministic_algorithms(True) device = 'cuda' n = 4 d = 2 m = 2 a = torch.zeros((n, *(1, )*(d-2), m), device=device) b = torch.ones((n, m), device=device) print('Full shape: ', a.shape) # Copy (n, m) tensor with 0-indices in between idx_tuple = (torch.arange(n), *(torch.tensor(0), )*(d-2)) print('Assigning to indices:', idx_tuple) a[idx_tuple] = b print('Sum of first column: ', a[idx_tuple].sum().item()) print('Sum of source data: ', b.sum().item()) ``` Note that changing the source shape from (n, m) to (n, 1) and attempting to do the same results in a known `INTERNAL_ASSERT_FAILED`. See comment 2 below. 
### Versions PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Laptop GPU Nvidia driver version: 510.60.02 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.11.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.21.2 py38h20f2e39_0 [conda] numpy-base 1.21.2 py38h79a1101_0 [conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch cc @ezyang @gchanan @zou3519 @mruberry @kurtamohler
7
5,914
76,175
Many dispatch keys do not print to string correctly
triaged, module: dispatch
### 🐛 Describe the bug I've been spending a lot of time with the "dispatch key implementation not found" error and I noticed that there are a lot of `UNKNOWN_TENSOR_TYPE_ID` in the output: ``` NotImplementedError: Could not run 'aten::_foreach_minimum.List' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_foreach_minimum.List' is only available for these backends: [Dense, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID ``` cc @bdhirsh ### Versions master
1
5,915
76,166
at::real and at::imag as methods
triaged, module: complex
### 🚀 The feature, motivation and pitch It'd make formulae easier to read, as you could read them left-to-right. E.g. ```c++ t.diagonal(0, -2, -1).real().mul_(0.5); ``` rather than ```c++ at::real(t.diagonal(0, -2, -1)).mul_(0.5); ``` I could submit a PR if people are fine with this. ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved
1
5,916
76,160
test_wishart_log_prob fails locally for me
module: distributions, module: tests, triaged
### 🐛 Describe the bug ``` ====================================================================== ERROR: test_wishart_log_prob (__main__.TestDistributions) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/users/ezyang/pytorch-tmp/test/distributions/test_distributions.py", line 2309, in test_wishart_log_prob expected = ref_dist.logpdf(x.transpose(0, 2).numpy()) File "/home/ezyang/local/pytorch-tmp-env/lib/python3.9/site-packages/scipy/stats/_multivariate.py", line 2258, in logpdf out = self._dist._logpdf(x, self.dim, self.df, self.scale, File "/home/ezyang/local/pytorch-tmp-env/lib/python3.9/site-packages/scipy/stats/_multivariate.py", line 1851, in _logpdf _, log_det_x[i] = self._cholesky_logdet(x[:, :, i]) File "/home/ezyang/local/pytorch-tmp-env/lib/python3.9/site-packages/scipy/stats/_multivariate.py", line 2221, in _cholesky_logdet c_decomp = scipy.linalg.cholesky(scale, lower=True) File "/home/ezyang/local/pytorch-tmp-env/lib/python3.9/site-packages/scipy/linalg/_decomp_cholesky.py", line 88, in cholesky c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True, File "/home/ezyang/local/pytorch-tmp-env/lib/python3.9/site-packages/scipy/linalg/_decomp_cholesky.py", line 37, in _cholesky raise LinAlgError("%d-th leading minor of the array is not positive " numpy.linalg.LinAlgError: 3-th leading minor of the array is not positive definite ---------------------------------------------------------------------- ``` I have scipy 1.8.0 installed ### Versions master cc @fritzo @neerajprad @alicanb @nikitaved @mruberry @jianyuh @pearu @walterddr @IvanYashchuk @xwang233 @Lezcano
4
5,917
76,155
Replace `RuntimeError` by custom exception for unsupported ONNX operators during export
module: onnx, triaged, onnx-triaged, topic: improvements
### 🐛 Describe the bug During `torch.onnx.export()`, `RuntimeError` [is](https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_helper.py#L360-L387) raised when the ONNX converter needs to signal that a specific operator is not supported. `RuntimeError` can also be raised by a legitimate user error (within the training script) or by an implementation bug inside a symbolic operator implementation. Today both scenarios are conflated, and it is up to the user to work out whether the failure is a PyTorch issue (and report a bug) or their own (and fix it). The likely outcome is spurious PyTorch bug reports, causing unnecessary noise for everyone. After https://github.com/pytorch/pytorch/pull/74759, when `operator_export_type=ONNX_ATEN_FALLBACK` and a `RuntimeError` is raised for whatever reason, the ONNX exporter will interpret the `RuntimeError` as an unsupported operator and will emit an ATen operator (instead of surfacing a possible implementation bug). ### Versions All PyTorch versions
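A hypothetical sketch of what a dedicated exception could look like; the class and attribute names below are placeholders, not the exporter's actual API:

```python
# Placeholder names for illustration only.
class OnnxExporterError(RuntimeError):
    """Base class for errors raised during torch.onnx.export()."""


class UnsupportedOperatorError(OnnxExporterError):
    """Raised when no symbolic is registered for an operator at the target opset."""

    def __init__(self, domain: str, op_name: str, opset_version: int):
        self.domain = domain
        self.op_name = op_name
        self.opset_version = opset_version
        super().__init__(
            f"Exporting the operator {domain}::{op_name} to ONNX opset "
            f"{opset_version} is not supported."
        )


# The ATen fallback path could then catch only UnsupportedOperatorError and emit
# an ATen node, while genuine user errors and converter bugs keep propagating.
```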
7
5,918
76,138
Coverage test is only checking packages and not all submodules
module: docs, triaged, module: python frontend
The `ispkg` filter should be removed from this check: https://github.com/pytorch/pytorch/blob/81722f66306aef05334c27bb2a64137d8fa6493a/docs/source/conf.py#L469 The allowlist needs to be updated to reflect that. cc @brianjo @mruberry
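For context, a minimal sketch of iterating every submodule rather than only packages, assuming the coverage check is built on `pkgutil.walk_packages` (which yields an `ispkg` flag per entry):

```python
import pkgutil
import torch

def all_torch_modules():
    # Dropping the `if ispkg` filter means leaf modules such as
    # torch.nn.functional are visited too, not just packages.
    for _finder, name, _ispkg in pkgutil.walk_packages(torch.__path__, prefix="torch."):
        yield name

print(sum(1 for _ in all_torch_modules()))
```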
0
5,919
76,128
test_license_for_wheel always fails on my local dev copy
module: tests, triaged
### 🐛 Describe the bug ``` python test/test_license.py sE ====================================================================== ERROR: test_license_for_wheel (__main__.TestLicense) ---------------------------------------------------------------------- Traceback (most recent call last): File "/data/users/ezyang/pytorch-tmp/test/test_license.py", line 27, in test_license_for_wheel create_bundled('third_party', current) File "/data/users/ezyang/pytorch-tmp/third_party/build_bundled.py", line 41, in create_bundled collected = collect_license(d) File "/data/users/ezyang/pytorch-tmp/third_party/build_bundled.py", line 19, in collect_license raise ValueError('could not identify license file ' ValueError: could not identify license file for third_party/cudnn_frontend ---------------------------------------------------------------------- Ran 2 tests in 0.168s FAILED (errors=1, skipped=1) ``` this seems to pass on master CI so I guess it's a problem with my local dev env... would be nice to make this test more robust ### Versions master cc @mruberry
6
5,920
76,127
run_test.py option to write out failed tests
module: tests, triaged, better-engineering
### 🐛 Describe the bug Sometimes I need to make a wide ranging change to PyTorch and run the entire test suite, and I only expect a few tests scattered here and there to fail. It would be nice to get a list of failing tests, which I could then pipe into the runner, and only execute those tests to see if my fix worked. ### Versions master cc @mruberry
0
5,921
76,113
Deprecation warning from SequentialLR
high priority, triaged, module: deprecation, module: LrScheduler
### 🐛 Describe the bug This probably comes from the sub-schedulers, which SequentialLR calls with the epoch arg. ``` /usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:154: UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose. warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning) ``` Related code: - https://github.com/pytorch/pytorch/blob/29a45f00097664d12921c8d13e387bee4b835665/torch/optim/lr_scheduler.py#L154 - https://github.com/pytorch/pytorch/blob/29a45f00097664d12921c8d13e387bee4b835665/torch/optim/lr_scheduler.py#L632 The comment should also be fixed: - https://github.com/pytorch/pytorch/blob/29a45f00097664d12921c8d13e387bee4b835665/torch/optim/lr_scheduler.py#L1276 For compatibility with third-party sub-schedulers that still rely on the epoch arg, you may: - Add a member switch that temporarily suppresses this warning when the sub-scheduler is called directly by SequentialLR, and revert it at the end of step(). - Or have SequentialLR allowlist PyTorch's own scheduler types and call them with the non-epoch signature. You may also need to check the type of `self` before printing that warning, to keep it quiet on derived types. ### Versions Commit 29a45f00097664d12921c8d13e387bee4b835665 cc @ezyang @gchanan @zou3519
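As a user-side stopgap until the schedulers stop forwarding the epoch argument, the warning can be filtered around the outer `step()` call; a sketch:

```python
import warnings
import torch
from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR

model = torch.nn.Linear(4, 4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = SequentialLR(
    opt,
    schedulers=[ConstantLR(opt, factor=0.5, total_iters=2), ExponentialLR(opt, gamma=0.9)],
    milestones=[2],
)

for _ in range(5):
    opt.step()
    # Stopgap only: hide the spurious deprecation warning emitted by the wrapped
    # schedulers; remove once SequentialLR no longer passes the epoch argument.
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", message="The epoch parameter")
        sched.step()
    print(opt.param_groups[0]["lr"])
```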
1
5,922
76,108
RuntimeError: [enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\caffe2\serialize\inline_container.cc:300]
module: serialization, triaged
### 🐛 Describe the bug ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py:380, in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization) 379 with _open_zipfile_writer(opened_file) as opened_zipfile: --> 380 _save(obj, opened_zipfile, pickle_module, pickle_protocol) 381 return File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py:604, in _save(obj, zip_file, pickle_module, pickle_protocol) 603 num_bytes = storage.nbytes() --> 604 zip_file.write_record(name, storage.data_ptr(), num_bytes) RuntimeError: Could not allocate bytes object! During handling of the above exception, another exception occurred: RuntimeError Traceback (most recent call last) Input In [4], in <cell line: 7>() 4 if epoch < 115: return 0.001 5 if epoch < 145: return 0.0001 ----> 7 yolo.train(net, train_iter, test_iter, 145, lr=lr, momentum=0.9, weight_decay=5e-4, device='cuda', accum_batch_num=4, save_path='./model') File ~\Desktop\yolo-v1-pytorch\yolo.py:375, in train(net, train_iter, test_iter, num_epochs, lr, momentum, weight_decay, device, accum_batch_num, save_path, load, load_epoch, pretrained) 372 animator.add(epoch + 1, (None, test_l)) 374 # save model --> 375 torch.save(net.state_dict(), os.path.join(save_path, f'./{time.time_ns()}-epoch-{epoch}.pth')) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py:381, in save(obj, f, pickle_module, pickle_protocol, _use_new_zipfile_serialization) 379 with _open_zipfile_writer(opened_file) as opened_zipfile: 380 _save(obj, opened_zipfile, pickle_module, pickle_protocol) --> 381 return 382 _legacy_save(obj, opened_file, pickle_module, pickle_protocol) File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py:260, in _open_zipfile_writer_buffer.__exit__(self, *args) 259 def __exit__(self, *args) -> None: --> 260 self.file_like.write_end_of_file() 261 self.buffer.flush() RuntimeError: [enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\caffe2\serialize\inline_container.cc:300] . unexpected pos 151863680 vs 151863576 ``` ### Versions ``` PyTorch version: 1.11.0+cu113 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Microsoft Windows 10 专业版 GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.19041-SP0 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 SUPER Nvidia driver version: 511.65 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.5 [pip3] torch==1.11.0+cu113 [pip3] torchaudio==0.11.0+cu113 [pip3] torchvision==0.12.0+cu113 [conda] Could not collect ``` cc @mruberry
15
5,923
76,106
Allow any operation that takes a Storage to also take a contiguous Tensor instead
feature, triaged, module: python frontend
### 🐛 Describe the bug We don't like storages, right? So we should make it possible to do stuff in Python userland without having to explicitly refer to storage. Allowing all operations on storages to instead accept contiguous tensors is a good start; e.g., `set_(..., storage_offset, size, strides)` should take a contiguous tensor. One fly in the ointment is that even if a tensor is contiguous, it could refer to the inside of a storage (non-zero storage offset). This should probably also be disallowed when using the API in this way, or accounted for (in the case of `set_`, you can just implicitly adjust `storage_offset` and it will work out) ### Versions master
0
5,924
76,105
Failed to build on Ubuntu 18.04 due to bad MPI linker flags
module: build, triaged
### 🐛 Describe the bug On Ubuntu 18.04, we got an invalid MPI flags in CMakeCache.txt: ``` root@0e084f65becb:/tmp/scratch/pytorch/build# cmake --build . -- -j1 [1/2617] : && /usr/bin/g++-8 -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/x86_64-linux-gnu/openmpi/lib;-L/usr//lib;-lmpi_cxx;-lmpi c10/benchmark/CMakeFiles/c10_intrusive_ptr_benchmark.dir/intrusive_ptr_benchmark.cpp.o -o bin/c10_intrusive_ptr_benchmark -Wl,-rpath,/tmp/scratch/pytorch/build/lib:/usr/local/lib: lib/libc10.so lib/libbenchmark.a /usr/local/lib/libglog.so.0.4.0 /usr/local/lib/libgflags.so.2.2.2 /usr/local/lib/libunwind.so -lpthread -pthread /usr/lib/x86_64-linux-gnu/librt.so && : FAILED: bin/c10_intrusive_ptr_benchmark : && /usr/bin/g++-8 -fdebug-prefix-map='/tmp/scratch'='/usr/local/src' -g1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow -O3 -DNDEBUG -DNDEBUG -rdynamic -L/usr/lib/x86_64-linux-gnu/openmpi/lib;-L/usr//lib;-lmpi_cxx;-lmpi c10/benchmark/CMakeFiles/c10_intrusive_ptr_benchmark.dir/intrusive_ptr_benchmark.cpp.o -o bin/c10_intrusive_ptr_benchmark -Wl,-rpath,/tmp/scratch/pytorch/build/lib:/usr/local/lib: lib/libc10.so lib/libbenchmark.a /usr/local/lib/libglog.so.0.4.0 /usr/local/lib/libgflags.so.2.2.2 /usr/local/lib/libunwind.so -lpthread -pthread /usr/lib/x86_64-linux-gnu/librt.so && : g++-8: fatal error: no input files compilation terminated. /bin/sh: 1: -L/usr//lib: not found /bin/sh: 1: -lmpi_cxx: not found /bin/sh: 1: -lmpi: not found ninja: build stopped: subcommand failed. ``` CMakeCache.txt: ``` 1396 //These flags will be placed after all flags passed to mpiexec. 1397 MPIEXEC_POSTFLAGS:STRING= 1398 1399 //These flags will be directly before the executable that is being 1400 // run by mpiexec. 
1401 MPIEXEC_PREFLAGS:STRING= 1402 1403 //MPI CXX additional include directories 1404 MPI_CXX_ADDITIONAL_INCLUDE_DIRS:STRING=/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi 1405 1406 //MPI compiler for CXX 1407 MPI_CXX_COMPILER:FILEPATH=MPI_CXX_COMPILER-NOTFOUND 1408 1409 //MPI CXX compiler wrapper include directories 1410 MPI_CXX_COMPILER_INCLUDE_DIRS:STRING= 1411 1412 //MPI CXX compilation definitions 1413 MPI_CXX_COMPILE_DEFINITIONS:STRING= 1414 1415 //MPI CXX compilation options 1416 MPI_CXX_COMPILE_OPTIONS:STRING=-pthread;-I/usr/lib/x86_64-linux-gnu/openmpi/include;-I/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi 1417 1418 //Path to a file. 1419 MPI_CXX_HEADER_DIR:PATH=/usr/lib/x86_64-linux-gnu/openmpi/include 1420 1421 //MPI CXX include directories 1422 MPI_CXX_INCLUDE_PATH:STRING=/usr/lib/x86_64-linux-gnu/openmpi/include;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi 1423 1424 //MPI CXX libraries to link against 1425 MPI_CXX_LIB_NAMES:STRING=mpi_cxx;mpi 1426 1427 //MPI CXX linker flags 1428 MPI_CXX_LINK_FLAGS:STRING=-L/usr/lib/x86_64-linux-gnu/openmpi/lib;-L/usr//lib;-lmpi_cxx;-lmpi 1429 1430 //If true, the MPI-2 C++ bindings are disabled using definitions. 1431 MPI_CXX_SKIP_MPICXX:BOOL=OFF 1432 1433 //MPI C additional include directories 1434 MPI_C_ADDITIONAL_INCLUDE_DIRS:STRING=/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi 1435 1436 //MPI compiler for C 1437 MPI_C_COMPILER:FILEPATH=MPI_C_COMPILER-NOTFOUND 1438 1439 //MPI C compiler wrapper include directories 1440 MPI_C_COMPILER_INCLUDE_DIRS:STRING= 1441 1442 //MPI C compilation definitions 1443 MPI_C_COMPILE_DEFINITIONS:STRING= 1444 1445 //MPI C compilation options 1446 MPI_C_COMPILE_OPTIONS:STRING=-pthread;-I/usr/lib/x86_64-linux-gnu/openmpi/include;-I/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi 1447 1448 //Path to a file. 1449 MPI_C_HEADER_DIR:PATH=/usr/lib/x86_64-linux-gnu/openmpi/include 1450 1451 //MPI C include directories 1452 MPI_C_INCLUDE_PATH:STRING=/usr/lib/x86_64-linux-gnu/openmpi/include;/usr/lib/x86_64-linux-gnu/openmpi/include/openmpi 1453 1454 //MPI C libraries to link against 1455 MPI_C_LIB_NAMES:STRING=mpi 1456 1457 //MPI C linker flags 1458 MPI_C_LINK_FLAGS:STRING=-L/usr/lib/x86_64-linux-gnu/openmpi/lib;-L/usr//lib;-lmpi 1459 1460 //Location of the mpi library for MPI 1461 MPI_mpi_LIBRARY:FILEPATH=/usr/lib/libmpi.so 1462 1463 //Location of the mpi_cxx library for MPI 1464 MPI_mpi_cxx_LIBRARY:FILEPATH=/usr/lib/x86_64-linux-gnu/libmpi_cxx.so ``` This is specific to the combination of Ubuntu 18.04 and CMake 3.22/3.23 so I cannot tell what's the root cause. CMake 3.21 works fine. CentOS 7/Debian 11 + CMake 3.23 also works fine. When it works, `MPI_CXX_LINK_FLAGS = -pthread`. ### Versions - Ubuntu 18.04 - gcc-8 - python 3.6 - cmake 3.23 - OpenMPI 4 - Commit 025cd69a86b4cce951c1fccb5b7160d8174e5ea9, the last one supporting stock python 3.6 on Ubuntu 18.04. cc @malfet @seemethere
7
5,925
76,103
Some tests fail when running in parallel.
oncall: distributed
### 🐛 Describe the bug Running `ctest -j36 --output-on-failure` on Ubuntu 18.04 ``` 96/141 Test #71: workspace_test ............................ Passed 0.73 sec Start 10: c10_Bitset_test 97/141 Test #12: c10_ConstexprCrc_test ..................... Passed 0.01 sec Start 9: c10_Array_test 98/141 Test #15: c10_Metaprogramming_test .................. Passed 0.01 sec Start 8: c10_SizesAndStrides_test 99/141 Test #16: c10_SmallVectorTest ....................... Passed 0.02 sec Start 7: c10_InlineStreamGuard_test 100/141 Test #135: FileStoreTest ............................. Passed 0.66 sec Start 6: c10_InlineDeviceGuard_test 101/141 Test #73: fixed_divisor_test ........................ Passed 0.75 sec Start 5: c10_StreamGuard_test 102/141 Test #11: c10_C++17_test ............................ Passed 0.02 sec Start 4: c10_DispatchKeySet_test 103/141 Test #10: c10_Bitset_test ........................... Passed 0.02 sec Start 3: c10_Device_test 104/141 Test #9: c10_Array_test ............................ Passed 0.01 sec Start 2: c10_DeviceGuard_test 105/141 Test #8: c10_SizesAndStrides_test .................. Passed 0.01 sec Start 1: c10_CompileTimeFunctionPointer_test 106/141 Test #65: parallel_net_test ......................... Passed 7.54 sec 107/141 Test #63: operator_schema_test ...................... Passed 0.79 sec 108/141 Test #7: c10_InlineStreamGuard_test ................ Passed 0.01 sec 109/141 Test #6: c10_InlineDeviceGuard_test ................ Passed 0.01 sec 110/141 Test #5: c10_StreamGuard_test ...................... Passed 0.01 sec 111/141 Test #4: c10_DispatchKeySet_test ................... Passed 0.01 sec 112/141 Test #2: c10_DeviceGuard_test ...................... Passed 0.01 sec 113/141 Test #1: c10_CompileTimeFunctionPointer_test ....... Passed 0.00 sec 114/141 Test #3: c10_Device_test ........................... Passed 0.01 sec 115/141 Test #14: c10_LeftRight_test ........................ Passed 0.16 sec 116/141 Test #103: utility_ops_test .......................... Passed 0.78 sec 117/141 Test #137: HashStoreTest ............................. Passed 0.81 sec 118/141 Test #72: inline_container_test ..................... Passed 0.83 sec 119/141 Test #139: ProcessGroupGlooAsyncTest ................. Passed 0.81 sec 120/141 Test #37: c10_cuda_CUDATest ......................... Passed 0.94 sec 121/141 Test #134: utility_ops_gpu_test ...................... Passed 9.77 sec 122/141 Test #64: operator_test ............................. Passed 9.82 sec 123/141 Test #121: event_gpu_test ............................ Passed 9.93 sec 124/141 Test #114: nnpack_test ...............................***Failed 10.09 sec 125/141 Test #132: reshape_op_gpu_test ....................... Passed 10.27 sec 126/141 Test #128: elementwise_op_gpu_test ................... Passed 10.73 sec 127/141 Test #131: operator_fallback_gpu_test ................ Passed 10.76 sec 128/141 Test #123: operator_gpu_test ......................... Passed 10.77 sec 129/141 Test #129: generate_proposals_op_gpu_test ............ Passed 10.79 sec 130/141 Test #66: plan_executor_test ........................ Passed 10.81 sec 131/141 Test #127: batch_permutation_op_gpu_test ............. Passed 10.91 sec 132/141 Test #133: roi_align_op_gpu_test ..................... Passed 11.13 sec 133/141 Test #126: batch_matmul_op_gpu_test .................. Passed 11.35 sec 134/141 Test #124: math_gpu_test ............................. Passed 11.35 sec 135/141 Test #119: blob_gpu_test ............................. 
Passed 12.27 sec 136/141 Test #141: ProcessGroupNCCLErrorsTest ................ Passed 14.95 sec 137/141 Test #120: context_gpu_test .......................... Passed 16.02 sec 138/141 Test #51: blob_test ................................. Passed 22.06 sec 139/141 Test #130: generate_proposals_op_util_nms_gpu_test ... Passed 22.63 sec 140/141 Test #75: fatal_signal_asan_no_sig_test ............. Passed 29.40 sec 141/141 Test #140: ProcessGroupNCCLTest ......................Bus error***Exception: 45.71 sec 98% tests passed, 3 tests failed out of 141 Total Test time (real) = 46.09 sec The following tests FAILED: 50 - op_registration_test (Failed) 114 - nnpack_test (Failed) 140 - ProcessGroupNCCLTest (Bus error) ``` Log of bus error: ``` 141/141 Test #140: ProcessGroupNCCLTest ......................Bus error***Exception: 39.07 sec Running main() from ../third_party/googletest/googletest/src/gtest_main.cc [==========] Running 11 tests from 1 test suite. [----------] Global test environment set-up. [----------] 11 tests from ProcessGroupNCCLTest [ RUN ] ProcessGroupNCCLTest.testAllreduce WARNING: Logging before InitGoogleLogging() is written to STDERR I0420 13:18:49.426566 233385 ProcessGroupNCCLTest.cpp:585] Multi-node world size: 1 rank: 0 I0420 13:19:01.975904 233385 ProcessGroupNCCL.cpp:632] [Rank 0] will wait up to 1800000 msec for NCCL health check to complete. I0420 13:19:03.251540 233385 ProcessGroupNCCL.cpp:583] [Rank 0] ProcessGroupNCCL initialized with following options: NCCL_ASYNC_ERROR_HANDLING: 0 NCCL_DESYNC_DEBUG: 0 NCCL_BLOCKING_WAIT: 0 TIMEOUT(ms): 1800000 USE_HIGH_PRIORITY_STREAM: 0 NCCL_DEBUG: UNSET I0420 13:19:03.251551 238663 ProcessGroupNCCL.cpp:728] [Rank 0] NCCL watchdog thread started! ``` - op_registration_test is probably https://github.com/pytorch/pytorch/issues/70160 - nnpack_test occasionally failed. - ProcessGroupNCCLTest consistently failed, but running the executable directly is fine. Sequential ctest is fine except for op reg. Regarding NCCL, this is a dual-GPU system with Volta + Pascal and running inside docker. ### Versions Commit 025cd69a86b4cce951c1fccb5b7160d8174e5ea9, the last one supporting stock python 3.6 on Ubuntu 18.04. cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
14
5,926
76,082
Eliminate uses of deprecated `FindCUDA.cmake`
module: build, module: cuda, triaged, better-engineering
`find_package(CUDA)` is the legacy way of using CMake to build CUDA code and was deprecated in CMake 3.10. #62445 transitioned over to `enable_language(CUDA)` which gives first class support for `CUDA` as a language in CMake. However, `find_package(CUDA)` is still being used to locate the CUDA toolkit libraries. This is potentially dangerous because we have two different mechanisms searching for the CUDA installation, each with their own configuration variables. So, it would be best to stop using `FindCUDA.cmake` entirely. The modern CMake equivalent for library targets is `find_package(CUDAToolkit)` which requires CMake 3.17, so we can't assume it's present yet. However, it might also be possible to vendor `FindCUDAToolkit.cmake` and use it anyway. There are also dependencies in `third_party` which still use `find_package(CUDA)` - [ ] [gloo](https://github.com/facebookincubator/gloo/blob/c22a5cfba94edf8ea4f53a174d38aa0c629d070f/cmake/Cuda.cmake#L133) - [ ] [tensorpipe](https://github.com/pytorch/tensorpipe/blob/52791a2fd214b2a9dc5759d36725909c1daa7f2e/tensorpipe/CMakeLists.txt#L237) - [ ] [kineto](https://github.com/pytorch/kineto/blob/b2b48c00c6e5bd8e807e2231adb229db6a1d1c22/libkineto/CMakeLists.txt#L181) cc @malfet @seemethere @ngimel
6
5,927
76,071
HIPFFT_EXEC_FAILED when running FFT on an AMD GPU
module: cuda, triaged, module: fft
### 🐛 Describe the bug When running the following function in my code, it reports an error: The code is: ``` import torch import torch.nn as nn import torch.fft import torch.cuda ... def fft_loss(patterson, electron_density): patterson = patterson[0,0,:,:,:] electron_density = electron_density[0,0,:,:,:] f1 = torch.fft.fftn(electron_density) f2 = torch.fft.fftn(torch.roll(torch.flip(electron_density, [0, 1, 2]), shifts=(1, 1, 1), dims=(0, 1, 2))) f3 = torch.mul(f1,f2) #f3 = torch.mul(f1.real,f1.real) f4 = torch.fft.ifftn(f3) f4 = f4.real vx = f4 - torch.mean(f4) vy = patterson - torch.mean(patterson) cost = (torch.sum(vx * vy) / (torch.sqrt(torch.sum(torch.square(vx)) + epsilon) * torch.sqrt(torch.sum(torch.square(vy)) + epsilon))) return cost ``` The patterson and electron_density are all tensors. The error is: Traceback (most recent call last): File "20220418_draft_seblock_20.py", line 274, in <module> loss5 = fft_loss(x, yhat) File "20220418_draft_seblock_20.py", line 187, in fft_loss f1 = torch.fft.fftn(electron_density) RuntimeError: cuFFT error: HIPFFT_EXEC_FAILED I'm doubting the AMD GPU doesn't support some of the FFT module? The same script runs successfully on NVIDIA GPUs. Best, Sky ### Versions PyTorch version: 1.8.0a0+56b43f4 Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 4.2.21155-37cb3a34 OS: CentOS Linux 7 (Core) (x86_64) GCC version: (GCC) 11.2.0 Clang version: 3.4.2 (tags/RELEASE_34/dot2-final) CMake version: version 2.8.12.2 Libc version: glibc-2.17 Python version: 3.8.9 (default, Jan 26 2022, 00:42:48) [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)] (64-bit runtime) Python platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: Vega 20 Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 3.27.5 MIOpen runtime version: 2.11.0 Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.22.1 [pip3] torch==1.8.0a0+56b43f4 [pip3] torchvision==0.9.0a0+01dfa8e [conda] Could not collect cc @ngimel @mruberry @peterbell10
1
5,928
76,070
NVFuser microbenchmark classifier - hash on memory formats
triaged, module: nvfuser
### 🚀 The feature, motivation and pitch in https://github.com/pytorch/benchmark/blob/main/torchbenchmark/util/classify_graphs.py, we should consider memory formats (e.g. stride orders) as part of the hash. This will separate fusion groups that have different memory formats. ### Alternatives _No response_ ### Additional context _No response_
0
5,929
76,069
`init_process_group` hanging on HPC multi-node system w GPU
oncall: distributed, triaged
### 🐛 Describe the bug Hi! I was trying to train the model with `torch.nn.parallel.DistributedDataParallel` on an HPC instance with 4 nodes, and 1 GPU per node. But the model verification program hangs for hours when encountered with `init_process_group`. Here is the program to reproduce the same error, I have tried to explore the previous issues(#53395) but no success till now. ```python import argparse import builtins import os import socket from contextlib import closing import torch import torch.distributed as dist from dartorch.data import DARDatasetOnVideos from dartorch.models import DrivingActionClassifier from torch.nn.parallel.distributed import DistributedDataParallel from torch.utils.data import DataLoader, DistributedSampler from torchinfo import summary def get_open_port(): with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s: s.bind(('', 0)) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) return s.getsockname()[1] def train_one_epoch(train_loader, model): for i, (data, target) in enumerate(train_loader): data, target = data.cuda(), target.cuda() cls = model(data) print(f'Input shape : {data.shape}, Output shape : {cls.shape}') break return if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument('data_dir', type=str, help='Data root directory.') parser.add_argument('--batch_size', type=int, default=1, help='Model batch size.') parser.add_argument('--im_size', type=int, default=128, help='Model image size(assumed square in size).') parser.add_argument('--nproc_dl', type=int, default=5, help='Number of workers to load data.') # DDP configuration parser.add_argument('--gpu', default=None, type=int) parser.add_argument('--world-size', default=-1, type=int, help='number of nodes for distributed training') parser.add_argument('--rank', default=-1, type=int, help='node rank for distributed training') parser.add_argument('--dist-url', default='env://', type=str, help='url used to set up distributed training') parser.add_argument('--dist-backend', default='nccl', type=str, help='distributed backend') parser.add_argument('--local_rank', default=-1, type=int, help='local rank for distributed training') args = parser.parse_args() port = get_open_port() os.environ['MASTER_ADDR'] = 'localhost' os.environ['MASTER_PORT'] = str(port) if "WORLD_SIZE" in os.environ: args.world_size = int(os.environ["WORLD_SIZE"]) args.distributed = args.world_size > 1 ngpus_per_node = torch.cuda.device_count() if args.distributed: if args.local_rank != -1: # for torch.distributed.launch args.rank = args.local_rank args.gpu = args.local_rank elif 'SLURM_PROCID' in os.environ: # for slurm scheduler args.rank = int(os.environ['SLURM_PROCID']) args.gpu = args.rank % torch.cuda.device_count() dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=args.world_size, rank=args.rank) # suppress printing if not on master gpu if args.rank!=0: def print_pass(*args): pass builtins.print = print_pass device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print(f'Target decvice : {device}') # Define model model = DrivingActionClassifier(in_size=(args.im_size, args.im_size)) # model_without_ddp = None # if args.distributed: # # For multiprocessing distributed, DistributedDataParallel constructor # # should always set the single device scope, otherwise, # # DistributedDataParallel will use all available devices. 
# if args.gpu is not None: # torch.cuda.set_device(args.gpu) # model.cuda(args.gpu) # model = DistributedDataParallel(model, device_ids=[args.gpu]) # model_without_ddp = model.module # else: # model.cuda() # model = DistributedDataParallel(model) # model_without_ddp = model.module # else: # raise NotImplementedError("Only DistributedDataParallel is supported.") # train_dataset = DARDatasetOnVideos(args.data_dir, test_user_ids = ["user_id_49381"], # img_size=(args.im_size, args.im_size), # load_reader_instances = True, # mode='train', shuffle_views=True) # train_sampler = DistributedSampler(train_dataset, shuffle=True) # train_loader = DataLoader( # train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None), # num_workers=args.nproc_dl, pin_memory=True, sampler=train_sampler, drop_last=False) for epoch in range(2): if args.distributed: print(f'Training on epoch : {epoch}') # train_loader.sampler.set_epoch(epoch) # train_one_epoch(train_loader, model) print(f'Test completed.') ``` ### Versions Collecting environment information... PyTorch version: 1.10.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: CentOS Linux 7 (Core) (x86_64) GCC version: (GCC) 7.3.0 Clang version: Could not collect CMake version: version 3.20.5 Libc version: glibc-2.17 Python version: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-3.10.0-1062.el7.x86_64-x86_64-with-centos-7.7.1908-Core Is CUDA available: True CUDA runtime version: 11.2.67 GPU models and configuration: GPU 0: Tesla V100-SXM2-16GB Nvidia driver version: 450.51.06 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] dartorch==0.0.0 [pip3] numpy==1.21.2 [pip3] torch==1.10.0 [pip3] torch-cluster==1.5.9 [pip3] torch-geometric==2.0.2 [pip3] torch-scatter==2.0.9 [pip3] torch-sparse==0.6.12 [pip3] torch-spline-conv==1.2.1 [pip3] torchaudio==0.10.0 [pip3] torchinfo==1.6.5 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.11.1 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] dartorch 0.0.0 dev_0 <develop> [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py37h7f8727e_0 [conda] mkl_fft 1.3.1 py37hd3c417c_0 [conda] mkl_random 1.2.2 py37h51133e4_0 [conda] numpy 1.21.2 py37h20f2e39_0 [conda] numpy-base 1.21.2 py37h79a1101_0 [conda] pyg 2.0.2 py37_torch_1.10.0_cu113 pyg [conda] pytorch 1.10.0 py3.7_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-cluster 1.5.9 py37_torch_1.10.0_cu113 pyg [conda] pytorch-mutex 1.0 cuda pytorch [conda] pytorch-scatter 2.0.9 py37_torch_1.10.0_cu113 pyg [conda] pytorch-sparse 0.6.12 py37_torch_1.10.0_cu113 pyg [conda] pytorch-spline-conv 1.2.1 py37_torch_1.10.0_cu113 pyg [conda] torchaudio 0.10.0 py37_cu113 pytorch [conda] torchinfo 1.6.5 pypi_0 pypi [conda] torchsummary 1.5.1 pypi_0 pypi [conda] torchvision 0.11.1 py37_cu113 pytorch cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
7
5,930
76,047
NNC failing opinfo accuracy tests
NNC
### 🐛 Describe the bug These started failing after fixing the opinfo tests so that the NNC-fused kernels would actually run. See https://github.com/pytorch/pytorch/runs/6070298059?check_suite_focus=true for the full logs. Failing tests: * [ ] test_nnc_correctness_addcmul_cpu_int16 * [ ] test_nnc_correctness_addcmul_cpu_int32 * [ ] test_nnc_correctness_addcmul_cpu_int64 * [ ] test_nnc_correctness_addcmul_cpu_int8 * [ ] test_nnc_correctness_bool_channels_last_cpu_int8 * [ ] test_nnc_correctness_bool_cpu_int8 * [ ] test_nnc_correctness_frac_cpu_float32 * [ ] test_nnc_correctness_frac_cpu_float64 logs: ``` ====================================================================== FAIL [0.132s]: test_nnc_correctness_addcmul_cpu_int16 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Tensor-likes are not equal! Mismatched elements: 17 / 25 (68.0%) Greatest absolute difference: 8 at index (3, 1) Greatest relative difference: 0.047619047619047616 at index (4, 1) ====================================================================== FAIL [0.116s]: test_nnc_correctness_addcmul_cpu_int32 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Tensor-likes are not equal! 
Mismatched elements: 17 / 25 (68.0%) Greatest absolute difference: 8 at index (3, 1) Greatest relative difference: 0.047619047619047616 at index (4, 1) ====================================================================== FAIL [0.116s]: test_nnc_correctness_addcmul_cpu_int64 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Tensor-likes are not equal! Mismatched elements: 17 / 25 (68.0%) Greatest absolute difference: 8 at index (3, 1) Greatest relative difference: 0.047619047619047616 at index (4, 1) ====================================================================== FAIL [0.128s]: test_nnc_correctness_addcmul_cpu_int8 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Tensor-likes are not equal! 
Mismatched elements: 17 / 25 (68.0%) Greatest absolute difference: 8 at index (3, 1) Greatest relative difference: 0.1702127659574468 at index (3, 1) ====================================================================== FAIL [0.240s]: test_nnc_correctness_bool_channels_last_cpu_int8 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Tensor-likes are not equal! Mismatched elements: 31 / 36 (86.1%) Greatest absolute difference: 254 at index (0, 2, 0, 1) Greatest relative difference: 0.99[6078](https://github.com/pytorch/pytorch/runs/6070298059?check_suite_focus=true#step:9:6078)431372549 at index (0, 2, 0, 1) ====================================================================== FAIL [0.028s]: test_nnc_correctness_bool_cpu_int8 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Scalars are not equal! 
Absolute difference: 247 Relative difference: 0.9959677419354839 ====================================================================== FAIL [0.033s]: test_nnc_correctness_frac_cpu_float32 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Tensor-likes are not close! Mismatched elements: 11 / 20 (55.0%) Greatest absolute difference: 1.0 at index (0,) (up to 1e-05 allowed) Greatest relative difference: 37.067545712442445 at index (14,) (up to 1.3e-06 allowed) ====================================================================== FAIL [0.033s]: test_nnc_correctness_frac_cpu_float64 (__main__.TestNNCOpInfoCPU) ---------------------------------------------------------------------- Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 376, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 773, in test_wrapper return test(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1097, in wrapper fn(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 930, in only_fn return fn(slf, *args, **kwargs) File "test_jit_fuser_te.py", line 2583, in test_nnc_correctness self.assertEqual(ref, val) File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 2203, in assertEqual msg=msg, File "/opt/conda/lib/python3.7/site-packages/torch/testing/_comparison.py", line 1074, in assert_equal raise error_metas[0].to_error() AssertionError: Tensor-likes are not close! Mismatched elements: 14 / 20 (70.0%) Greatest absolute difference: 1.0 at index (0,) (up to 1e-07 allowed) Greatest relative difference: 35.59956005068432 at index (4,) (up to 1e-07 allowed) ---------------------------------------------------------------------- Ran 5295 tests in 1695.779s FAILED (failures=8, skipped=821, expected failures=45) Generating XML reports... 
Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-TestNNCOpInfoCPU-20220419001401.xml Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-jit.test_fuser_common.TestFuserCommon-20220419001401.xml Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-TestLoopnestRandomizationCPU-20220419001401.xml Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-TestTEFuserDynamic-20220419001401.xml Generated XML report: test-reports/python-unittest/test_jit_fuser_te/TEST-TestTEFuserStatic-20220419001401.xml Traceback (most recent call last): File "test/run_test.py", line 1058, in <module> main() File "test/run_test.py", line 1036, in main raise RuntimeError(err_message) RuntimeError: test_jit_fuser_te failed! ``` ### Versions master, CI cc @ZolotukhinM
0
5,931
76,043
RuntimeError: bucket_count == per_bucket_sizes.size() INTERNAL ASSERT FAILED
module: cuda, triaged, module: ddp
### 🐛 Describe the bug When training with 4GPUs by using DDP, my program have the following errors: ``` File "anaconda3/envs/tnt2/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 993, in forward if torch.is_grad_enabled() and self.reducer._rebuild_buckets(): RuntimeError: bucket_count == per_bucket_sizes.size() INTERNAL ASSERT FAILED at "../torch/csrc/distributed/c10d/reducer.cpp":963, please report a bug to PyTorch ``` Other information: 1. If I train the model with batch size 1 on each GPU, the model will throw the above error 2. If I train the model with batch size 10 on each GPU, the model will be blocked at `loss.backward()` with 100% CPU utilization. 3. If I do not use DDP, the model can be trained successfully with 1 single GPU. ### Versions Collecting environment information... PyTorch version: 1.12.0.dev20220419+cu113 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Debian GNU/Linux 10 (buster) (x86_64) GCC version: (Debian 8.3.0-6) 8.3.0 Clang version: 7.0.1-8+deb10u2 (tags/RELEASE_701/final) CMake version: version 3.13.4 Libc version: glibc-2.28 Python version: 3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0] (64-bit runtime) Python platform: Linux-5.4.173.1.amd64-smp-x86_64-with-glibc2.10 Is CUDA available: True CUDA runtime version: 11.2.67 GPU models and configuration: GPU 0: Quadro RTX 8000 GPU 1: Quadro RTX 8000 GPU 2: Quadro RTX 8000 GPU 3: Quadro RTX 8000 Nvidia driver version: 470.86 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.19.5 [pip3] torch==1.12.0.dev20220419+cu113 [pip3] torchaudio==0.12.0.dev20220419+cu113 [pip3] torchvision==0.13.0.dev20220419+cu113 [conda] numpy 1.19.5 pypi_0 pypi [conda] torch 1.12.0.dev20220419+cu113 pypi_0 pypi [conda] torchaudio 0.12.0.dev20220419+cu113 pypi_0 pypi [conda] torchvision 0.13.0.dev20220419+cu113 pypi_0 pypi cc @ngimel
0
5,932
76,040
SSL certificate error: urlopen error [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)>
triaged
### 🐛 Describe the bug I'm facing SSL error while downloading the 'resnext101_32x16d_swsl' model from torch.hub.load('facebookresearch/semi-supervised-ImageNet1K-models', 'resnext101_32x16d_swsl') ``` error message: URLError: <urlopen error [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1091)> ``` ### Versions [pip3] numpy==1.21.5 [pip3] torch==1.10.0 [pip3] torchaudio==0.10.0+cu111 [pip3] torchsummary==1.5.1 [pip3] torchtext==0.11.0 [pip3] torchvision==0.11.1+cu111
1
5,933
76,039
[RFC] NPU device for PyTorch
oncall: distributed
### 🚀 The feature, motivation and pitch NPU is the device abstraction for Ascend Neural Network Processing Units. NPU works as a normal torch device like CPU, GPU and XPU, so PyTorch users can easily migrate model scripts to perform model training/inference on NPU. Enabling NPU consists of two parts: 1. registering a new 'NPU' device type to PyTorch, which is the focus of this RFC; 2. implementation of PyTorch ops and runtime for NPU, which has been supported via a PyTorch extension, i.e. Ascend Extension for PyTorch (https://github.com/Ascend/pytorch). PyTorch users can load this extension to enable the full functionality of NPU, similar to what other PyTorch extensions do. ### Alternatives _No response_ ### Additional context Working with NPU: PyTorch users interact with the NPU device as a normal device but need to import torch_npu for full functionality. ``` import torch import torch_npu # Ascend Extension for PyTorch npu = torch.device('npu') input = torch.randn([100]).to(npu) model.to(npu) output = model(input) ``` cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
2
5,934
76,037
__torch_function__ and generator input hazard
triaged, module: __torch_function__
### 🐛 Describe the bug Generators in Python are a one-shot business: you can iterate through them once, and then that's it; they'll silently return an empty list upon subsequent iterations. Generators cause trouble for `__torch_function__`, where we may need to iterate through a list multiple times to detect Tensor subclass arguments. In general, list-accepting PyTorch functions that are bound from C don't accept iterable arguments. However, poorly written userland code can accidentally iterate over iterable arguments multiple times. I committed this mistake with the implementation of torch.autograd.grad: ``` t_outputs = cast(Tuple[torch.Tensor, ...], (outputs,) if is_tensor_like(outputs) else tuple(outputs)) t_inputs = cast(Tuple[torch.Tensor, ...], (inputs,) if is_tensor_like(inputs) else tuple(inputs)) overridable_args = t_outputs + t_inputs if has_torch_function(overridable_args): return handle_torch_function( grad, overridable_args, outputs, inputs, grad_outputs=grad_outputs, retain_graph=retain_graph, create_graph=create_graph, only_inputs=only_inputs, allow_unused=allow_unused, is_grads_batched=is_grads_batched, ) ``` It is necessary to iterate over the inputs/outputs to detect tensor subclasses, but doing so will destroy an iterator. Thus, we must use the result of the iterator to actually implement the function in question; we cannot preserve the arguments as is. ### Versions master cc @hameerabbasi @rgommers @peterbell10
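A minimal sketch (not taken from the issue) of the hazard described above: scanning a generator argument for tensors exhausts it, leaving nothing for the actual implementation to consume.

```python
import torch

def tensor_like_args(args):
    # A scan like the one used to collect overridable arguments before
    # handle_torch_function; iterating consumes `args` when it is a generator.
    return [a for a in args if isinstance(a, torch.Tensor)]

gen = (torch.randn(2) for _ in range(3))     # generator input
print(len(tensor_like_args(gen)))            # 3 -- the scan walked the whole generator
print(list(gen))                             # [] -- nothing left for the real op

# Safe pattern (the fix in torch.autograd.grad): materialize to a tuple first,
# then both the override check and the actual implementation use that tuple.
args = tuple(torch.randn(2) for _ in range(3))
print(len(tensor_like_args(args)), len(args))   # 3 3 -- a tuple survives repeated passes
```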
0
5,935
76,031
Computer using CPU instead of NVIDIA GPU with CUDA
module: cuda, triaged
### 🐛 Describe the bug Using the YoloV5, it seems that it is using entirely the CPU and not the GPU, and I can't seem to understand why... I have already opened an issue in the YoloV5 repository (https://github.com/ultralytics/yolov5/issues/7277), but it seems to be a PyTorch problem... Can someone help me? Here is the problem and some videos. I have been working with yolov5 for quite some time now, and I always used google colab. But since they are implementing limitations, I have decided to buy an eGPU and an Nvidia 3080ti to train my networks. I successfully installed the drivers and can use de GPU for other software. I can also use the GPU for running a trained network, using yolo detection.py and even using my code based on the PyTorch library. For some reason, I can't figure out why I can't seem to use my GPU for training. When I use the train.py from yolov5 it doesn't use my Cuda Nvidia GPU. I have tried multiple versions of python, multiple versions of Cuda, and multiple versions of PyTorch (LTS, stable, nightly) and I still can't figure it out. When I run the train.py file it detects that I have a 3080ti, but when it starts the epochs, my GPU usage is 0% and my CPU usage jumps to 90%, so it is using my CPU. And here I leave a couple of videos demonstrating the problem. Even thought I'm using YoloV5 the problem seems to be in pytroch, can someone please help me? ### Versions Collecting environment information... PyTorch version: 1.11.0+cpu Is debug build: False CUDA used to build PyTorch: Could not collect ROCM used to build PyTorch: N/A OS: Microsoft Windows 10 Pro GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.9.12 (tags/v3.9.12:b28265d, Mar 23 2022, 23:52:46) [MSC v.1929 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.19044-SP0 Is CUDA available: False CUDA runtime version: 11.3.58 GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.22.3 [pip3] torch==1.11.0 [pip3] torchaudio==0.11.0+cu113 [pip3] torchvision==0.12.0 [conda] Could not collect cc @ngimel
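Not part of the report, but worth noting against the environment above (which shows `1.11.0+cpu` and `Is CUDA available: False`): a quick check like the sketch below distinguishes a CPU-only wheel from a CUDA build before blaming the training script.

```python
import torch

# On a CPU-only wheel the version string typically ends in "+cpu",
# torch.version.cuda is None, and cuda.is_available() is False, so any
# training loop silently runs on the CPU regardless of the installed driver.
print(torch.__version__)           # e.g. "1.11.0+cpu" vs "1.11.0+cu113"
print(torch.version.cuda)          # None on a CPU-only build
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```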
11
5,936
76,030
Dirichlet with small concentration
module: distributions, triaged
### 🐛 Describe the bug When sampling from Dirichlet distribution with a low (<<1) concentration, it returns in about 1/K of the times (K=number of categories) the vector 1/K*(1,1,1,...) i.e (1/3,1/3,1/3) for 3 categories. This shouldn't be the case as the samples are supposed to be concentrated on the vertices of the simplex. Example code: ```python import torch from torch import distributions a = torch.ones(3)*1e-2 b = torch.ones(3)*1e-3 f=distributions.Dirichlet(a) g=distributions.Dirichlet(b) f.sample((10,)).max(-1) g.sample((10,)).max(-1) # here is the bug ``` ### Versions --2022-04-19 12:19:37-- https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py Resolving raw.githubusercontent.com... 185.199.108.133, 185.199.110.133, 185.199.109.133, ... Connecting to raw.githubusercontent.com|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 17051 (17K) [text/plain] Saving to: ‘collect_env.py’ collect_env.py 100%[======================================================================================================================>] 16.65K --.-KB/s in 0s 2022-04-19 12:19:37 (54.9 MB/s) - ‘collect_env.py’ saved [17051/17051] cc @fritzo @neerajprad @alicanb @nikitaved
2
5,937
76,029
Mobile assets upload could break third party mirrors due to binary data size
oncall: mobile
### 🐛 Describe the bug https://github.com/pytorch/pytorch/pull/74947 added mobile assets which seem to be used in mobile unit testing via https://github.com/pytorch/pytorch/pull/74793. The total size increase using these binary files is ~88MB, which sounds undesirable in a source code repository, given these files are used for testing (please correct me if I'm wrong about their usage). `GitLab` has a default max. size for binary files of 10MB and will break if you try to rebase your fork with the current PyTorch master. A workaround could be to rewrite the git history via a similar command to: ``` git filter-branch --force --index-filter \ 'git rm --cached --ignore-unmatch path/to/ceo.jpg' \ --prune-empty --tag-name-filter cat -- --all ``` which is also problematic as a simple rebase is not possible anymore and you might end up re-writing the history in each rebase. Another workaround is to increase the size limit of your mirror to be able to fit the largest binary file of ~45MB. Could we please host model checkpoints in 3rd party mirrors as is done in e.g. `torchvision` in `https://download.pytorch.org/models/` or any other location outside of this repository? Reverting this commit or removing the files without re-writing the git history would not solve the original issue, as these files would still be in the history and `git push` would still fail. CC @malfet @atalman @linbinyu ### Versions All versions after https://github.com/pytorch/pytorch/pull/74947
0
5,938
76,028
A bug in instructions for building PyTorch with ASAN
module: docs, module: ci, triaged, topic: bug fixes
Following instructions in https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md#building-pytorch-with-asan , the build fails at the CMake step: ``` <snip> -- Performing Test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- Performing Test HAVE_STD_REGEX -- compiled but failed to run -- Performing Test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile -- Performing Test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- Performing Test HAVE_POSIX_REGEX -- compiled but failed to run CMake Error at third_party/benchmark/CMakeLists.txt:283 (message): Failed to determine the source files for the regular expression backend ``` The problem is in https://github.com/pytorch/pytorch/blob/bd7e99cbb9e7980b89c26ae3fa5596f6e4aaebc4/CONTRIBUTING.md?plain=1#L1150 where `CXX_FLAGS` is spelled wrong and `-shared-libasan` is missing. A fix is to replace the above line with: ``` CXXFLAGS="-shared-libasan -pthread" \ ``` On the other hand, this seems to work on CI https://github.com/pytorch/pytorch/blob/bd7e99cbb9e7980b89c26ae3fa5596f6e4aaebc4/.jenkins/pytorch/build-asan.sh#L39 but [speculations follow] that could be related to the misspelling of `CXX_FLAGS` and the build scripts picks up `CXXFLAGS` from the environment.. cc @brianjo @mruberry @seemethere @malfet @pytorch/pytorch-dev-infra
0
5,939
76,026
Jit torchscript for prediction is missing 'forward' when using forward hooks
oncall: jit
### 🐛 Describe the bug I'm using forward hooks to extract layer values from a pre-trained CNN and use them as features for my model. I also want to use torchscript for inference. The problem is that when I try to export any other method than 'forward' I get an error that 'forward' is missing for the registered forward hooks. I have a minimal example: ```python from typing import Iterable, Callable, Tuple from torch import Tensor, nn, ones, jit, empty from torchvision.models import resnet50 class FeatureExtractor(nn.Module): def __init__(self, model: nn.Module, layers: Iterable[str]): super().__init__() self.model = model self.layers = layers for layer_id in layers: layer = dict([*self.model.named_modules()])[layer_id] layer.register_forward_hook(self.save_outputs_hook(layer_id)) def save_outputs_hook(self, layer_id: str) -> Callable: def fn(_, input: Tuple[Tensor], output): print('Hi') return fn def forward(self, x: Tensor): return self.model(x) @jit.export def predict(self, x: Tensor): return self.model(x) if __name__ == '__main__': dummy_input = ones(10, 3, 224, 224) resnet_features = FeatureExtractor(resnet50(), layers=["layer4", "avgpool"]) features = resnet_features(dummy_input) script = jit.trace(resnet_features, dummy_input) ``` This fails with: ``` RuntimeError: Couldn't find method: 'forward' on class: '__torch__.torch.nn.modules.container.___torch_mangle_141.Sequential (of Python compilation unit at: 0x7fdc5a676da8)' ``` If I deregister the hooks or export forward instead of predict this of course runs without problem. ### Versions PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 11.6 (x86_64) GCC version: Could not collect Clang version: 13.0.0 (clang-1300.0.29.30) CMake version: Could not collect Libc version: N/A Python version: 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)] (64-bit runtime) Python platform: macOS-11.6-x86_64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.22.3 [pip3] pytorch-lightning==1.6.0 [pip3] torch==1.11.0 [pip3] torchmetrics==0.7.3 [pip3] torchvision==0.12.0 [conda] Could not collect
0
5,940
76,025
Numerical instability: matrix multiplication gives different results on CPU and GPU
triaged, module: tf32
### 🐛 Describe the bug matrix multiplication got different results on cpu and gpu, `5.5665e-06` difference. Here is a minimal example ```python import torch srf = [[0.00784314, 0.00492611, 0.02788845], [0.00392157, 0.00492611, 0.03984064], [0.00392157, 0.00492611, 0.05976096], [0.00392157, 0.00492611, 0.07569721], [0.00392157, 0.00492611, 0.09960159], [0.00392157, 0.00492611, 0.11553785], [0. , 0.00985222, 0.11952191], [0. , 0.01970443, 0.11553785], [0. , 0.02955665, 0.10756972], [0. , 0.03940887, 0.0876494], [0. , 0.05418719, 0.06374502], [0. , 0.07881773, 0.03585657], [0. , 0.09359606, 0.00796813], [0. , 0.10344828, 0.], [0. , 0.09852217, 0.], [0. , 0.08866995, 0.], [0.00784314, 0.07881773, 0.], [0.02352941, 0.06896552, 0.], [0.04313725, 0.05418719, 0.], [0.06666667, 0.03448276, 0.], [0.08235294, 0.02463054, 0.00398406], [0.08627451, 0.01477833, 0.00398406], [0.08235294, 0.00985222, 0.00398406], [0.07843137, 0.00985222, 0.00398406], [0.07843137, 0.00492611, 0.00398406], [0.0745098 , 0.00492611, 0.00398406], [0.0745098 , 0.00985222, 0.00398406], [0.07058824, 0.00985222, 0.00398406], [0.07058824, 0.00985222, 0.00398406], [0.06666667, 0.00985222, 0.00398406], [0.06666667, 0.00985222, 0.00398406],] srf_cpu = torch.tensor(srf) srf_cuda = torch.tensor(srf).cuda() a = srf_cpu @ srf_cpu.T b = srf_cuda @ srf_cuda.T b = b.cpu() print(torch.allclose(a,b)) # False print((a-b).abs().max()) # tensor(5.5665e-06) ``` ### Versions pytorch 1.10.1 cudatoolkit 11.3.1 cc @zasdfgbnm @ptrblck
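Not part of the original report: since the issue carries the `module: tf32` label, here is a sketch of how to check whether TF32 matmul (enabled by default on Ampere GPUs in this PyTorch version) accounts for a difference of this size, by toggling the flag and comparing.

```python
import torch

a = torch.randn(31, 3)
cpu = a @ a.T

# TF32 matmul is on by default for CUDA in PyTorch 1.10; turning it off
# forces full-precision float32 matmul on the GPU for comparison.
torch.backends.cuda.matmul.allow_tf32 = False
gpu_fp32 = (a.cuda() @ a.cuda().T).cpu()

torch.backends.cuda.matmul.allow_tf32 = True
gpu_tf32 = (a.cuda() @ a.cuda().T).cpu()

print((cpu - gpu_fp32).abs().max())  # expected to be tiny (ordinary rounding)
print((cpu - gpu_tf32).abs().max())  # can reach ~1e-5/1e-6 when TF32 is actually used
```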
4
5,941
76,024
The prediction results on different machines are inconsistent
needs reproduction, module: nn, triaged
### 🐛 Describe the bug I trained a RESNET classification network in torchvision in the environment of server (Titan XP GPU), with an accuracy of more than 90%. I hope I can reason on my own laptop (Windows). But the reasoning result of the notebook has always been Nan. The reasoning result on the server side is normal. In order to find the problem, I checked the model weight and input image in detail. When they are exactly the same, the prediction results of personal notebook (Windows) and server are inconsistent, and the prediction result of personal computer is Nan. Then I was in my notebook Inference code: from ipdb import set_trace from torchvision import models from torchvision import transforms import torch import torch.nn as nn from PIL import Image import random import os #load the trained model to classify the images model_file = './model_18.pth' model = torch.load(model_file).cuda().eval() classes = { 0:'ants',1:'bees'} # 数据转换 valid_tf = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) ]) file_path = 'test.jpg' image = Image.open(file_path) image = image.convert(mode='RGB') img_data = torch.unsqueeze(valid_tf(image),0).cuda() pred = model(img_data) print(pred) Forecast results: notebook: "tensor([[nan, nan]], device='cuda:0', grad_fn=<AddmmBackward0>)" server: "tensor([[-0.2480, -0.0757]], device='cuda:0', grad_fn=<AddmmBackward0>)" ### Versions (pytorch) PS D:\deeplearning\咸鱼文件\classification> python .\collect_env.py Collecting environment information... PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 专业版 GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 05:59:00) [MSC v.1929 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.22000-SP0 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050 Nvidia driver version: 512.15 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.22.3 [pip3] torch==1.11.0 [pip3] torchaudio==0.11.0 [pip3] torchvision==0.12.0 [conda] blas 1.0 mkl http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free [conda] cudatoolkit 11.3.1 h59b6b97_2 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main [conda] libblas 3.9.0 14_win64_mkl http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge [conda] libcblas 3.9.0 14_win64_mkl http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge [conda] liblapack 3.9.0 14_win64_mkl http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge [conda] mkl 2022.0.0 h0e2418a_796 http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge [conda] numpy 1.22.3 py38h5ed9b9d_2 http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge [conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchaudio 0.11.0 py38_cu113 pytorch [conda] torchvision 0.12.0 py38_cu113 pytorch cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
4
5,942
76,013
test_jit.py TestWarn.test_warn and friends don't work under pytest
oncall: jit
### 🐛 Describe the bug ``` pytest test/test_jit.py -k test_warn ``` gives ``` __________________________ TestWarn.test_warn_once_per_func ___________________________ Traceback (most recent call last): File "/data/users/ezyang/pytorch-tmp/test/jit/test_warn.py", line 93, in test_warn_on ce_per_func FileCheck() \ RuntimeError: Expected to find "UserWarning: I am warning you" but did not find it Searched string: From CHECK-COUNT-2: UserWarning: I am warning you ______________________ TestWarn.test_warn_once_per_func_in_loop _______________________ Traceback (most recent call last): File "/data/users/ezyang/pytorch-tmp/test/jit/test_warn.py", line 117, in test_warn_once_per_func_in_loop FileCheck() \ RuntimeError: Expected to find "UserWarning: I am warning you" but did not find it Searched string: From CHECK-COUNT-2: UserWarning: I am warning you ____________________________ TestWarn.test_warn_only_once _____________________________ Traceback (most recent call last): File "/data/users/ezyang/pytorch-tmp/test/jit/test_warn.py", line 50, in test_warn_on ly_once FileCheck() \ RuntimeError: Expected to find "UserWarning: I am warning you" but did not find it Searched string: From CHECK-COUNT-1: UserWarning: I am warning you ``` The warnings then show up in warnings summary so probably there's something going on with pytest's installed warning handler ### Versions master
1
5,943
76,012
torch.nn.LayerNorm is very slow on GPU (much slower than a custom LayerNorm version in the ConvNext model)
module: nn, module: cuda, triaged
### 🐛 Describe the bug I found that for a (B, C, H, W) tensor, `nn.LayerNorm` is much slower (0.088s w/o permute and 0.14s with necessary permute) than the custom LayerNorm version for the ConvNext model (0.04s) `https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119`, while both results are equivalent (up to numerical errors). ``` import numpy as np import torch import torch.nn as nn import time ln_time = 0.0 ln_ln_time = 0.0 ln_permute1_time = 0.0 ln_permute2_time = 0.0 custom_ln_time = 0.0 bn_time = 0.0 T = 8 device = 'cuda:0' size=(16,256,512,128) weight = torch.rand(size[1]).to(device) bias = torch.randn(size[1]).to(device) class LayerNorm(nn.Module): r""" LayerNorm that supports two data formats: channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch_size, height, width, channels) while channels_first corresponds to inputs with shape (batch_size, channels, height, width). """ def __init__(self, normalized_shape, eps=1e-8, data_format="channels_last"): super().__init__() self.weight = nn.Parameter(weight.clone()) self.bias = nn.Parameter(bias.clone()) self.eps = eps self.data_format = data_format if self.data_format not in ["channels_last", "channels_first"]: raise NotImplementedError self.normalized_shape = (normalized_shape, ) def forward(self, x): if self.data_format == "channels_last": return F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) elif self.data_format == "channels_first": u = x.mean(1, keepdim=True) s = (x - u).pow(2).mean(1, keepdim=True) x = (x - u) / torch.sqrt(s + self.eps) x = self.weight[:, None, None] * x + self.bias[:, None, None] return x ln = nn.LayerNorm(size[1], eps=1e-8).to(device) ln.weight = nn.Parameter(weight.clone()) ln.bias = nn.Parameter(bias.clone()) ln_custom = LayerNorm(size[1], data_format="channels_first").to(device) a = torch.randn(size).to(device) b = a.permute(0,2,3,1).contiguous() ln_a = ln(b) ln_a = ln_a.permute(0,3,1,2).contiguous() ln_custom_a = ln_custom(a) print("|ln - ln_custom| max", torch.max(torch.abs(ln_a - ln_custom_a))) del a,b ln_custom = LayerNorm(size[1], data_format="channels_first").to(device) for t in range(T): print(t) a = torch.randn(size).to(device) torch.cuda.synchronize() tt = time.time() b=ln_custom(a) torch.cuda.synchronize() custom_ln_time += time.time() - tt del b print("custom_ln_time", custom_ln_time / T) ln = nn.LayerNorm(size[1]).to(device) ln.weight = nn.Parameter(weight.clone()) ln.bias = nn.Parameter(bias.clone()) for t in range(T): print(t) a = torch.randn(size).to(device) torch.cuda.synchronize() tt = time.time() b=a.permute(0,2,3,1).contiguous() torch.cuda.synchronize() ln_permute1_time += time.time() - tt tt=time.time() b=ln(b) torch.cuda.synchronize() ln_permute2_time += time.time() - tt tt=time.time() b=b.permute(0,3,1,2).contiguous() torch.cuda.synchronize() ln_ln_time += time.time() - tt del b print("pytorch ln_ln_time", ln_ln_time / T, "ln_permute1_time", ln_permute1_time / T, "ln_permute2_time", ln_permute2_time / T) print("pytorch ln_total_time", (ln_ln_time + ln_permute1_time + ln_permute2_time) / T) ``` Output: ``` |ln - ln_custom| max tensor(1.4305e-06, device='cuda:0', grad_fn=<MaxBackward1>) 0 1 2 3 4 5 6 7 custom_ln_time 0.0418052077293396 0 1 2 3 4 5 6 7 pytorch ln_ln_time 0.08832478523254395 ln_permute1_time 0.017529428005218506 ln_permute2_time 0.03777831792831421 pytorch ln_total_time 0.14363253116607666 ``` ### 
Versions PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: 11.3.58 GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti GPU 1: NVIDIA GeForce GTX 1080 Ti Nvidia driver version: 470.103.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.22.3 [pip3] numpy-quaternion==2022.4.1 [pip3] pytorch3d==0.3.0 [pip3] torch==1.11.0 [pip3] torchaudio==0.11.0 [pip3] torchsparse==1.4.0.dev0 [pip3] torchvision==0.12.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.22.3 pypi_0 pypi [conda] numpy-base 1.21.2 py38h79a1101_0 [conda] numpy-quaternion 2022.4.1 pypi_0 pypi [conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] pytorch3d 0.3.0 pypi_0 pypi [conda] torchaudio 0.11.0 py38_cu113 pytorch [conda] torchsparse 1.4.0.dev0 pypi_0 pypi [conda] torchvision 0.12.0 py38_cu113 pytorch cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345 @ngimel
4
5,944
76,011
backcompat tests in test_nn.py are slow
module: nn, module: tests, triaged
### 🐛 Describe the bug My test runner gets visibly stuck on these tests when they execute; they take more than a minute to run. ### Versions master cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
0
5,945
76,007
Build a default NVFuser comparison callback, e.g. for use with torchbench
triaged, module: nvfuser
### 🚀 The feature, motivation and pitch A default comparison callback would make it a lot easier to start using the callback. Callback functionality was added in https://github.com/pytorch/pytorch/pull/74361. ### Alternatives _No response_ ### Additional context _No response_
0
5,946
75,986
gql_mocks.json has really long lines
triaged, better-engineering
### 🐛 Describe the bug This file has really long lines, which means that whenever I run a `git grep` that hits this file it's very annoying to scroll through all the results. Can we somehow split it up? ### Versions master
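One possible way to split it up (a sketch, assuming the file is ordinary JSON and nothing depends on its exact byte layout; the path below is a guess) is to re-serialize it with indentation so each value gets its own line:

```python
import json

path = ".github/scripts/gql_mocks.json"  # hypothetical location, adjust to the real one
with open(path) as f:
    data = json.load(f)

# Pretty-print so `git grep` hits stay short instead of returning one huge line.
with open(path, "w") as f:
    json.dump(data, f, indent=2, sort_keys=True)
    f.write("\n")
```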
0
5,947
75,984
DISABLED test_zero_model_parallel_parameters_as_bucket_view_True (__main__.TestZeroRedundancyOptimizerDistributed)
oncall: distributed, module: rocm, skipped
Platforms: rocm This test was disabled because it is failing flakily on master ([recent examples](http://torch-ci.com/failure/test_zero_model_parallel_parameters_as_bucket_view_True%2C%20TestZeroRedundancyOptimizerDistributed)). Culprit could be related to https://github.com/pytorch/pytorch/pull/75753 @pritamdamania cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
2
5,948
75,982
API to determine if a torch.return_type is a "structseq"
feature, triaged
## Motivation A "structseq" is this weird Python-C API thing used to create namedtuple-like types from C++. https://github.com/pytorch/pytorch/pull/75915 introduces some logic that ideally would grab all "structseq" classes from torch.return_types and register them as PyTree nodes. However, we didn't know how to actually check if something is a structseq or not (the [documentation](https://docs.python.org/3/c-api/tuple.html?highlight=structseq#struct-sequence-objects) doesn't provide any helper functions for that). If possible we should write an API that can check if a type is a "structseq" and then use it in https://github.com/pytorch/pytorch/pull/75915
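There is no official predicate, but a heuristic sketch (assumption: CPython structseq types are tuple subclasses that expose `n_fields` / `n_sequence_fields` class attributes, which plain tuples and `collections.namedtuple` classes lack) could look like:

```python
import os
import torch

def is_structseq(cls) -> bool:
    # Heuristic check for PyStructSequence ("structseq") types.
    return (
        isinstance(cls, type)
        and issubclass(cls, tuple)
        and hasattr(cls, "n_fields")
        and hasattr(cls, "n_sequence_fields")
    )

print(is_structseq(os.stat_result))          # True  -- a stdlib structseq
print(is_structseq(tuple))                   # False
print(is_structseq(torch.return_types.max))  # expected True for the return types
```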
1
5,949
75,963
Add build support for GCC 11.2
needs reproduction, module: build, triaged
My build from source fails with GCC 11.2; is it possible to support that compiler? Thanks cc @malfet @seemethere
1
5,950
75,960
jit/_trace.py", line 71, in _unique_state_dict filtered_dict[k] = v.detach() AttributeError: 'torch.dtype' object has no attribute 'detach'
oncall: jit
### 🐛 Describe the bug Trying to export a torch.fx quantized model to ONNX gives this error: ```jit/_trace.py", line 71, in _unique_state_dict filtered_dict[k] = v.detach() AttributeError: 'torch.dtype' object has no attribute 'detach' ``` On the discussion forum it was claimed that quantized model export to ONNX is already supported, but I still get this error on a single res18 model from torchvision. ### Versions torch 1.11
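The report does not include the failing script; below is a minimal reproduction sketch of the export path described, assuming FX graph-mode post-training quantization of a torchvision ResNet-18 (the `prepare_fx` entry points and signatures differ between releases, so treat this as an approximation of torch 1.11 usage):

```python
import torch
from torchvision.models import resnet18
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

model = resnet18().eval()
example = torch.randn(1, 3, 224, 224)

# FX graph-mode post-training quantization (torch 1.11-style API).
qconfig_dict = {"": get_default_qconfig("fbgemm")}
prepared = prepare_fx(model, qconfig_dict)
prepared(example)              # one calibration pass
quantized = convert_fx(prepared)

# torch.onnx.export traces the module; the reported AttributeError is raised
# while the tracer walks the state_dict and meets a torch.dtype entry.
torch.onnx.export(quantized, example, "resnet18_int8.onnx", opset_version=13)
```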
2
5,951
75,956
[JIT] [Autocast] JIT Autocast Pass operations' list should be extendable and consistent with imperative path
oncall: jit
### Tasks: - [ ] Make autocast ops extensible, e.g. via a function that modifies a global list of ops & their behavior, **per device** - [ ] To better copy eager behavior, we could try using codegen to generate both eager and jit autocasting rules ### 🐛 Describe the bug Currently, the operations' list is hardcoded in `JIT Autocast pass` by different categories (https://github.com/pytorch/pytorch/blob/ef50186a7d05cd65fc605598480b52375cadd4b2/torch/csrc/jit/passes/autocast.cpp#L322-L440). Regardless of the limitation mentioned in the document (https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/JIT-AUTOCAST.md). We think the `JIT Autocast operation list` should aligne with `Imperative path Autocast`: 1. The imperative path Autocast Operation list is registered by `Autocast` dispatch key, which means it's extensible by third party to register same operation into different operation list (For the case, third party such as IPEX has optimization on this operation). So `JIT Autocast operation` list should aligned with `Imperative path Autocast` to be extensible. 2. The imperative path Autocast Operation list is distinguishable by different devices by dispatch key of `Autocast` and `AutocastCPU`. The JIT Autocast should also align with it. If the `JIT Autocast operation list` is not aligned with `Imperative path Autocast`, we will probably see the issue when using Autocast with JIT Trace with third party such as IPEX. Take the below case for example: ``` import torch import torch.nn as nn import intel_extension_for_pytorch as ipex class AtenSoftmaxRepalce(nn.Module): def __init__(self, dim=-1): super(AtenSoftmaxRepalce, self).__init__() self.conv = torch.nn.Conv2d(3, 64, (3, 3), stride=(2, 2), padding=(1, 1), bias=False) self.softmax = torch.nn.Softmax(dim) def forward(self, x): return self.softmax(x) model = AtenSoftmaxRepalce() model.eval() x = torch.rand(1, 3, 224, 224).to(torch.bfloat16) with torch.no_grad(): with torch.cpu.amp.autocast(cache_enabled=False): model = torch.jit.trace(model, x).eval() model = torch.jit.freeze(model) print("------start the run--------------") print(model.graph_for(x)) print(model(x).dtype) ``` In IPEX, we have optimization on `Softmax` to support bf16 input and output. So we extend the imperative path Autocast operation list by change `softmax` from `black list` into `fall through list` inside IPEX. However, after going through the `Autocast JIT Pass`, it will insert the `aten::_autocast_to_full_precision` node to cast the `bf16 input` into `fp32` which changes the original graph generated by `jit trace`. 
Here is the new graph after we enable the `Autocast JIT Pass`: ``` graph(%self.1 : __torch__.___torch_mangle_3.AtenSoftmaxRepalce, %x : Tensor): %2 : int = prim::Constant[value=-1](), scope: __module.softmax # /home/lesliefang/pytorch_1_7_1/ssd-rn34/frameworks.ai.pytorch.private-cpu/torch/nn/functional.py:1783:0 %3 : NoneType = prim::Constant(), scope: __module.softmax %4 : bool = prim::Constant[value=0]() %5 : bool = prim::Constant[value=1]() %8 : Tensor = prim::profile[profiled_type=BFloat16(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu), seen_none=0](%x) %6 : Tensor = aten::_autocast_to_full_precision(%8, %4, %5) %9 : Tensor = prim::profile[profiled_type=Float(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu), seen_none=0](%6) %7 : Tensor = aten::softmax(%9, %2, %3), scope: __module.softmax # /home/lesliefang/pytorch_1_7_1/ssd-rn34/frameworks.ai.pytorch.private-cpu/torch/nn/functional.py:1783:0 %10 : Tensor = prim::profile[profiled_type=Float(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu), seen_none=0](%7) = prim::profile() return (%10) ``` Here is the original graph if we disable the `Autocast JIT Pass`: ``` graph(%self.1 : __torch__.___torch_mangle_3.AtenSoftmaxRepalce, %x : Tensor): %2 : int = prim::Constant[value=-1](), scope: __module.softmax # /home/lesliefang/pytorch_1_7_1/ssd-rn34/frameworks.ai.pytorch.private-cpu/torch/nn/functional.py:1783:0 %3 : NoneType = prim::Constant(), scope: __module.softmax %5 : Tensor = prim::profile[profiled_type=BFloat16(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu), seen_none=0](%x) %4 : Tensor = aten::softmax(%5, %2, %3), scope: __module.softmax # /home/lesliefang/pytorch_1_7_1/ssd-rn34/frameworks.ai.pytorch.private-cpu/torch/nn/functional.py:1783:0 %6 : Tensor = prim::profile[profiled_type=BFloat16(1, 3, 224, 224, strides=[150528, 50176, 224, 1], requires_grad=0, device=cpu), seen_none=0](%4) = prim::profile() return (%6) torch.bfloat16 ``` ### Versions And we are using stock pytorch with commit: https://github.com/pytorch/pytorch/commit/123297a8c02e0ebbf8b0ae3d3cb16d0dc2e350b4.
4
5,952
75,949
Potential memory leak in Adam optimizer in AMD chips (CPU)
needs reproduction, module: optimizer, module: rocm, module: memory usage, triaged
### 🐛 Describe the bug Adam optimizer seems to be leaking memory(?) in AMD chips. A minimal example doing some simple training can be found here: https://gist.github.com/lingchunkai/dbb0c001a2fca1273487d69ac04c2f7e After 5 million iterations, virtual memory usage rises to 70804240 (kb) and resident memory to 54691068 (kb). When using CUDA on the same machine, no such growth is seen and we use around 2GB of GPU memory and roughly 14617576 and 4707448 memory of virtual and resident memory. Both virtual and resident memory are seen to increase seemingly without bounds. I have done some simple testing and it seems RMSProp suffers from the same issue. I am using an AMD Ryzen Threadripper 1950X 16-Core Processor, with python 3.9.7 and pytorch version 1.10.1. Using the same python and pytorch version and running on an Intel machine does not yield the same problem. I have also tested on my Mac which has a M1 chip, and the memory usage seems to be stable. ### Versions PyTorch version: 1.10.1 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 10.1.243 GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Nvidia driver version: 495.29.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.3 [pip3] torch==1.10.1 [pip3] torch-scatter==2.0.9 [pip3] torchaudio==0.10.1 [pip3] torchvision==0.11.2 [conda] _pytorch_select 0.1 cpu_0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py39h7f8727e_0 [conda] mkl_fft 1.3.1 py39hd3c417c_0 [conda] mkl_random 1.2.2 py39h51133e4_0 [conda] numpy 1.21.3 pypi_0 pypi [conda] numpy-base 1.21.2 py39h79a1101_0 [conda] pytorch 1.10.1 py3.9_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] pytorch-scatter 2.0.9 py39_torch_1.10.0_cu115 pyg [conda] torchaudio 0.10.1 py39_cu113 pytorch [conda] torchvision 0.11.2 py39_cu113 pytorch cc @vincentqb @jbschlosser @albanD @jeffdaily @sunway513 @jithunnair-amd @ROCmSupport @KyleCZH
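A minimal sketch (not the linked gist) of the kind of loop that can be used to watch resident memory while Adam runs on CPU; the model size, step count, and use of `resource` for sampling are assumptions.
```python
import resource
import torch

model = torch.nn.Linear(128, 128)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5_000_000):
    x = torch.randn(64, 128)
    loss = model(x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 100_000 == 0:
        # ru_maxrss is reported in kB on Linux; steady growth suggests a leak.
        rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        print(f"step {step}: max RSS {rss_kb} kB")
```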
3
5,953
75,943
FSDP remove the requirement of all trainable parameters
high priority, oncall: distributed, triaged, module: fsdp
### 🚀 The feature, motivation and pitch There are cases where the model has some fixed/untrainable parameters which should not be updated. In this case, FSDP raises the exception at https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/flatten_params_wrapper.py#L363: assert (len(set(p.requires_grad for p in params)) == 1 ), "expects all parameters to have same requires_grad" ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @gchanan @zou3519 @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @pietern @SciPioneer
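A minimal sketch of the situation, assuming a CUDA/NCCL setup launched with `torchrun` so the process-group environment variables are set; freezing one submodule mixes `requires_grad` values inside a single flattened parameter group, which should trip the quoted assertion.
```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")  # assumes torchrun provides rank/world size

model = torch.nn.Sequential(
    torch.nn.Linear(16, 16),
    torch.nn.Linear(16, 16),
)
for p in model[0].parameters():  # freeze only the first layer
    p.requires_grad = False

# Wrapping flattens trainable and frozen parameters into one group, so the
# "expects all parameters to have same requires_grad" assertion fires.
fsdp_model = FSDP(model.cuda())
```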
12
5,954
75,940
Add nesting of nested Tensor
triaged, module: nestedtensor
### 🚀 The feature, motivation and pitch Currently the nested tensor implementation isn't very nested. The `nested_tensor` function allows for a list of `Tensor`s but not a list of `NestedTensor`: ``` >>> import torch >>> torch.__version__ '1.12.0.dev20220406+cu113' >>> a = torch.rand([10]) >>> b = torch.rand([8]) >>> nested = torch.nested_tensor([a, b]) >>> c = torch.rand([4]) >>> d = torch.rand([7]) >>> nested2 = torch.nested_tensor([c, d]) >>> double_nested = torch.nested_tensor([nested, nested2]) /usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.9) or chardet (3.0.4) doesn't match a supported version! warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported " Traceback (most recent call last): File "<stdin>", line 1, in <module> NotImplementedError: Could not run 'aten::nested_tensor' with arguments from the 'NestedTensor' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::nested_tensor' is only available for these backends: [Dense, FPGA, ORT, Vulkan, Metal, Meta, Quantized, CustomRNGKeyId, MkldnnCPU, Sparse, SparseCsrCPU, SparseCsrCUDA, NestedTensor, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, FuncTorchDynamicLayerBackMode, ADInplaceOrView, AutogradOther, AutogradFunctionality, AutogradNestedTensor, Tracer, AutocastCPU, Autocast, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, Functionalize, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, TESTING_ONLY_GenericWrapper, TESTING_ONLY_GenericMode, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, CPU, CUDA, HIP, XLA, MLC, XPU, VE, Lazy, PrivateUse1, PrivateUse2, PrivateUse3, UNKNOWN_TENSOR_TYPE_ID, QuantizedCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID]. 
Undefined: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] CPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] CUDA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] HIP: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] XLA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] MLC: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] XPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] HPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] VE: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] Lazy: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] PrivateUse1: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] PrivateUse2: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] PrivateUse3: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] FPGA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] ORT: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] Vulkan: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] Metal: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] Meta: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] QuantizedCPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] QuantizedCUDA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] QuantizedXPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] CustomRNGKeyId: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] MkldnnCPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] SparseCPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] SparseCUDA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] SparseHIP: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 
[math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] SparseXPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] SparseVE: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] SparseCsrCPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] SparseCsrCUDA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback] Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:113 [backend fallback] Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback] Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback] Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback] ZeroTensor: registered at ../aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback] ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback] AutogradOther: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradCPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradCUDA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradXLA: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradMLC: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradXPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradHPU: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] UNKNOWN_TENSOR_TYPE_ID: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradLazy: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradPrivateUse1: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradPrivateUse2: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] AutogradPrivateUse3: registered at aten/src/ATen/RegisterCompositeImplicitAutograd.cpp:10423 [math kernel] Tracer: registered at ../torch/csrc/autograd/generated/TraceType_1.cpp:11468 [kernel] AutocastCPU: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:462 [backend fallback] Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback] Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1059 [backend fallback] VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback] 
Functionalize: registered at ../aten/src/ATen/FunctionalizeFallbackKernel.cpp:65 [backend fallback] PythonTLSSnapshot: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:117 [backend fallback] ``` This feature request is to add the ability to arbitrarily nest `NestedTensor`s. My use case for multiple nested tensors is the ability to load data that doesn't have the same shape. Most manipulations of the values can be done on the unbound (not nested) tensors. Having a single return value for labels with different shapes in a data loader can drastically simplify my pipeline, whereas most data pipelines have nested tuples of values that all have to conform to the same shape. ### Alternatives _No response_ ### Additional context _No response_ cc @cpuhrsch
5
5,955
75,936
AllGather with backward support async_op=True
oncall: distributed
### 🚀 The feature, motivation and pitch I have a project which uses an all-gather contrastive loss, but _AllGather [https://github.com/pytorch/pytorch/blob/master/torch/distributed/nn/functional.py#L276] does not support async_op, so it cannot overlap computation and communication. ### Alternatives _No response_ ### Additional context _No response_ cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
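For context, a minimal sketch of the differentiable all-gather pattern used in contrastive losses; as the issue notes, `torch.distributed.nn.functional.all_gather` returns the gathered tensors synchronously, with no `async_op` handle to overlap communication with the rest of the forward pass.
```python
import torch
from torch.distributed.nn.functional import all_gather

def gathered_contrastive_logits(local_feats: torch.Tensor, temperature: float = 0.07):
    # Differentiable all-gather: gradients flow back to every rank's features,
    # but the call blocks until the collective completes.
    world_feats = torch.cat(all_gather(local_feats), dim=0)
    return local_feats @ world_feats.t() / temperature
```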
0
5,956
75,935
torch.jit.trace error when custom autograd function used in the model
oncall: jit
### 🐛 Describe the bug custom autograd function defined in the model, jit.trace reports error ``` class GradientMultiplyLayer(torch.autograd.Function): @staticmethod def forward(ctx, input, mask_bw): ctx.save_for_backward(mask_bw) return input @staticmethod def backward(ctx, grad_output): mask_bw, = ctx.saved_tensors return grad_output.mul(mask_bw), None ... traced_script_module = torch.jit.trace(model, example) traced_script_module.save("model_for_libtorch.pt") Could not export Python function call 'GradientMultiplyLayer'. Remove calls to Python functions before export. Did you forget to add @script or @script_method annotation? If this is a nn.ModuleList, , add it to __constants__: ``` I can't solve this problem. thanks for your help ### Versions Collecting environment information... PyTorch version: 1.11.0+cu113 Is debug build: False *CUDA used to build PyTorch: 11.3 *ROCM used to build PyTorch: N/A OS: Microsoft Windows 10 GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-10-10.0.18362-SP0 Is CUDA available: True CUDA runtime version: 10.2.89 GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 Ti Nvidia driver version: 511.79 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.19.5 [pip3] torch==1.11.0+cu113 [pip3] torchaudio==0.11.0+cu113 [pip3] torchsummary==1.5.1 [pip3] torchvision==0.12.0+cu113 [conda] Could not collect
3
5,957
75,926
Disable TracerWarnings on NVFuser opinfo tests
oncall: jit, module: bootcamp, triaged
### 🚀 The feature, motivation and pitch If you run the nvfuser opinfo tests, you'll see a lot of tracer warnings like this: ``` TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the da ta flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! ``` (you can repro with `$ PYTORCH_TEST_WITH_SLOW=1 gpui python test/test_jit_cuda_fuser.py -k test_nvfuser_correctness__masked_log_softmax_cuda_f loat64`, for example) We can disable these with the [NoWarn](https://github.com/pytorch/pytorch/blob/aa51ee2345022db1f318b4adf460abe461e38bc5/torch/csrc/jit/frontend/tracer.h#L179-L194) guard, but currently it doesn't have python bindings - so you'll have to add python bindings first. You can model this after InsertPointGuard which has both a [c++ implementation](https://github.com/pytorch/pytorch/blob/aa51ee2345022db1f318b4adf460abe461e38bc5/torch/csrc/jit/ir/ir.h#L1444-L1456) and a [python implementation](https://github.com/pytorch/pytorch/blob/aa51ee2345022db1f318b4adf460abe461e38bc5/torch/jit/_ir_utils.py#L4-L18) ### Alternatives _No response_ ### Additional context _No response_
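A rough sketch of what the Python-side guard could look like once bindings exist, modeled on the `insert_point_guard` pattern the issue points to; `torch._C._jit_tracer_set_warn` below is a hypothetical name for the not-yet-written binding, not an existing API.
```python
import contextlib
import torch

@contextlib.contextmanager
def no_tracer_warnings():
    # Hypothetical binding that would flip the C++ tracer NoWarn state.
    prev = torch._C._jit_tracer_set_warn(False)  # does not exist yet
    try:
        yield
    finally:
        torch._C._jit_tracer_set_warn(prev)      # restore previous state
```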
0
5,958
75,925
autogen-58 microbenchmark fails on NNC gpu fusion
NNC
### 🐛 Describe the bug autogen-58 (from https://github.com/pytorch/benchmark/pull/801) fails on NNC. Repro is [here](https://gist.github.com/davidberard98/12fdd498e5284a543f2daa55addc8445). Output is: ``` $ gpui python 58-repro.py Traceback (most recent call last): File "/fsx/users/dberard/benchmark/58-repro.py", line 261, in <module> log_extract.run_nnc(ir, inputs, dynamic=False) File "/fsx/users/dberard/pytorch/torch/utils/jit/log_extract.py", line 105, in run_nnc return run_test(ir, inputs) File "/fsx/users/dberard/pytorch/torch/utils/jit/log_extract.py", line 75, in run_test graph(*inputs) RuntimeError: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): RuntimeError: CUDA driver error: too many resources requested for launch srun: error: dev-st-p4d24xlarge-1: task 0: Exited with exit code 1 ``` cc @ZolotukhinM @huiguoo ### Versions A100 (AWS cluster), master.
0
5,959
75,923
aten::_softmax.out doesn't work with non-contiguous Tensors
module: nn, triaged
### 🐛 Describe the bug Repro script: ```python import torch from torch.testing import make_tensor inp = torch.rand(2, 3) out = make_tensor(inp.shape, dtype=inp.dtype, device=inp.device, noncontiguous=True) print("BEFORE SOFTMAX: \n", out) torch._softmax(inp, 0, half_to_float=False, out=out) print("AFTER SOFTMAX: \n", out) out_real = torch._softmax(inp, 0, half_to_float=False) print("EXPECTED OUTPUT: \n", out_real) ``` outputs: ``` BEFORE SOFTMAX: tensor([[-0.1820, 4.4953, -8.6387], [-2.9375, 8.6783, 7.0183]]) AFTER SOFTMAX: tensor([[ 0.5257, 0.4881, 0.3627], [-2.9375, 8.6783, 7.0183]]) EXPECTED OUTPUT: tensor([[0.5257, 0.6373, 0.4881], [0.4743, 0.3627, 0.5119]]) ``` This is clearly wrong because the second row is not updated and there shouldn't be any negative numbers. cc: @gmagogsfm @zhxchen17 ### Versions PyTorch version: 1.12.0a0+gitfab098d Is debug build: True CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: CentOS Stream 8 (x86_64) GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-3) Clang version: Could not collect CMake version: version 3.19.6 Libc version: glibc-2.28 Python version: 3.8.10 (default, May 19 2021, 18:05:58) [GCC 7.3.0] (64-bit runtime) Python platform: Linux-5.6.13-0_fbk19_hardened_6064_gabfd136bb69a-x86_64-with-glibc2.10 Is CUDA available: False CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy==0.812 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.20.2 [pip3] torch==1.12.0a0+gitfab098d [conda] blas 1.0 mkl [conda] mkl 2021.2.0 h06a4308_296 [conda] mkl-include 2021.2.0 h06a4308_296 [conda] mkl-service 2.3.0 py38h27cfd23_1 [conda] mkl_fft 1.3.0 py38h42c9631_2 [conda] mkl_random 1.2.1 py38ha9443f7_2 [conda] mypy 0.812 pyhd3eb1b0_0 [conda] mypy_extensions 0.4.3 py38_0 [conda] numpy 1.20.2 py38h2d18471_0 [conda] numpy-base 1.20.2 py38hfae3a4d_0 [conda] torch 1.10.0a0+gitacc9f9a pypi_0 pypi cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
1
5,960
75,912
interaction with psychopy during imports, script exits with: free(): invalid pointer. Aborted (core dumped)
triaged
### 🐛 Describe the bug I'm using `xvfb-run` to run code that uses pytorch and psychopy (which requires an attached display). When I run script with no errors that imports `torch` and `psychopy`, the script will run until the end, and then exit with an error if psychopy was imported first. --- This version exits with an error: ```python from psychopy import visual import torch import time print("begin") time.sleep(1) print("end") ``` ```shell $ xvfb-run python scrap.py begin end free(): invalid pointer Aborted (core dumped) ``` --- This version works fine: ```python import torch from psychopy import visual import time print("begin") time.sleep(1) print("end") ``` ```shell $ xvfb-run python script.py begin end ``` --- To reproduce easily, you can setup a fresh virtualenv and install the following frozen requirements: ```shell python3 -m venv tmp_venv source tmp_venv/bin/activate pip install -r frozen_requirements.txt --no-deps ``` Here's the necessary contents for `frozen_requirements.txt`: ```shell arabic-reshaper==2.1.3 astunparse==1.6.3 attrs==21.4.0 cachetools==5.0.0 certifi==2021.10.8 cffi==1.15.0 charset-normalizer==2.0.12 cryptography==36.0.2 cycler==0.11.0 decorator==4.4.2 esprima==4.0.1 et-xmlfile==1.1.0 fonttools==4.32.0 freetype-py==2.2.0 future==0.18.2 gevent==21.12.0 gitdb==4.0.9 GitPython==3.1.27 glfw==2.5.3 google-api-core==2.7.2 google-auth==2.6.5 google-cloud==0.34.0 google-cloud-speech==2.13.1 googleapis-common-protos==1.56.0 greenlet==1.1.2 idna==3.3 imageio==2.16.2 imageio-ffmpeg==0.4.7 jedi==0.18.1 json-tricks==3.15.5 kiwisolver==1.4.2 markdown-it-py==2.0.1 matplotlib==3.5.1 mdurl==0.1.1 moviepy==1.0.3 msgpack==1.0.3 msgpack-numpy==0.4.7.1 numpy==1.22.3 opencv-python==4.5.5.64 openpyxl==3.0.9 packaging==21.3 pandas==1.4.2 parso==0.8.3 Pillow==9.1.0 proglog==0.1.9 proto-plus==1.20.3 protobuf==3.20.0 PsychoPy==2022.1.2 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.21 pyglet==1.5.23 pyparsing==3.0.8 python-bidi==0.4.2 python-dateutil==2.8.2 pytz==2022.1 PyYAML==6.0 requests==2.27.1 rsa==4.8 scipy==1.8.0 six==1.16.0 smmap==5.0.0 SpeechRecognition==3.8.1 torch==1.11.0 tqdm==4.64.0 typing-extensions==4.1.1 urllib3==1.26.9 wxPython==4.1.1 zope.event==4.5.0 zope.interface==5.4.0 ``` Please let me know if I can provide more useful info about this. Thanks! ### Versions Collecting environment information... PyTorch version: 1.11.0+cu102 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.1 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.4.0-100-generic-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA TITAN RTX GPU 1: NVIDIA TITAN RTX Nvidia driver version: 465.19.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] msgpack-numpy==0.4.7.1 [pip3] numpy==1.22.3 [pip3] torch==1.11.0 [conda] Could not collect
6
5,961
75,911
'python setup.py build' fails, but succeeds when using 'pip install -v .', which calls 'python setup.py build'.
module: build, triaged
## Issue description 'python setup.py build' failed but succeed using 'pip install -v .' which calls 'python setup.py build'. Error message: subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '6']' returned non-zero exit status 2. ## Log # TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1 7.0+PTX 8.0" TORCH_NVCC_FLAGS="-Xfatbin -compress-all" CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" python setup.py build Building wheel torch-1.7.0a0 -- Building version 1.7.0a0 cmake --build . --target install --config Release -- -j 6 make: *** No rule to make target 'install'. Stop. Traceback (most recent call last): File "setup.py", line 717, in <module> build_deps() File "setup.py", line 308, in build_deps build_caffe2(version=version, File "/pytorch-builder/src/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2 cmake.build(my_env) File "/pytorch-builder/src/pytorch/tools/setup_helpers/cmake.py", line 345, in build self.run(build_args, my_env) File "/pytorch-builder/src/pytorch/tools/setup_helpers/cmake.py", line 141, in run check_call(command, cwd=self.build_dir, env=env) File "/opt/conda/lib/python3.8/subprocess.py", line 364, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '6']' returned non-zero exit status 2. ## System Info # python ./torch/utils/collect_env.py Collecting environment information... PyTorch version: 1.7.0a0+bd37e59 Is debug build: True CUDA used to build PyTorch: Could not collect ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.6 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: Could not collect CMake version: version 3.10.2 Python version: 3.8 (64-bit runtime) Is CUDA available: False CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2070 SUPER Nvidia driver version: 470.86 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.21.5 [pip3] torch==1.7.0a0 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] numpy 1.21.5 py38he7a7128_1 [conda] numpy-base 1.21.5 py38hf524024_1 [conda] torch 1.7.0a0 pypi_0 pypi cc @malfet @seemethere
1
5,962
75,910
[FSDP] Verify buffer checkpointing
high priority, triage review, oncall: distributed, module: fsdp
### 🚀 The feature, motivation and pitch `test_fsdp_state_dict` currently does not contain any logic to test that buffers are checkpointed appropriately. We should add these tests and also verify that it works for rank0 and offload_to_cpu checkpoint which is being added in https://github.com/pytorch/pytorch/pull/75908 ### Alternatives _No response_ ### Additional context _No response_ cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
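A rough sketch of the kind of check that could be added (not the actual `test_fsdp_state_dict` code); the initialized process group, the FSDP state_dict key layout, and the `.cuda()` placement are assumptions.
```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

class ModelWithBuffer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(8, 8)
        self.register_buffer("running_stat", torch.zeros(8))

    def forward(self, x):
        return self.lin(x) + self.running_stat

def check_buffer_roundtrip():
    src = ModelWithBuffer()
    with torch.no_grad():
        src.running_stat.fill_(3.0)          # give the buffer a non-default value
    state = FSDP(src.cuda()).state_dict()

    dst = FSDP(ModelWithBuffer().cuda())
    dst.load_state_dict(state)
    loaded = [v for k, v in dst.state_dict().items() if "running_stat" in k]
    torch.testing.assert_close(loaded[0].cpu(), torch.full((8,), 3.0))
```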
0
5,963
75,909
Add batching rules for `{view}_copy` operators
triaged, module: batching
The plan for the `functionalize()` transform is that it will soon have two modes: - `functionalize(remove='mutations')` will remove all inplace and out= ops from the function - `functionalize(remove='mutations_and_views')` will *additionally* convert view operators into their view_copy variants. You can't use that second version of `functionalize()` together with `vmap` today though, because none of the `view_copy` operators have batching rules. Most of them can use the fallback batching rule, but some aren't able to because of their schemas, like `at::split_copy`. So maybe we should just ensure that the "problematic" view copy ops have batching rules first.
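A sketch of the composition in question; the `remove=` strings come from the description above, while the import location of `functionalize` is an assumption (it lived under `functorch.experimental` around this time).
```python
import torch
from functorch import vmap
from functorch.experimental import functionalize  # import location assumed

def f(x):
    # split is one of the ops whose *_copy variant lacks a batching rule
    a, b = x.split(2, dim=-1)
    return a + b

x = torch.randn(8, 4)
vmap(functionalize(f, remove="mutations"))(x)            # views stay views: works
vmap(functionalize(f, remove="mutations_and_views"))(x)  # split_copy: fails today
```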
0
5,964
75,904
Move _SKIP_PYTHON_BINDINGS to native_functions.yaml
triaged, module: codegen
### 🐛 Describe the bug @swolchok ran into a case where he named a function with a `_forward` suffix, which caused it to magically not show up in the Python bindings, in a way that was not debuggable until I remembered that there are name-based matches. We should move this into a field on `native_functions.yaml` to make it more obvious when Python bindings are being suppressed. ### Versions master cc @ezyang @bhosmer @bdhirsh
0
5,965
75,903
torch.jit.script'd function very slow on first invocation on latest nightly
oncall: jit, NNC, module: nvfuser
### 🐛 Describe the bug It takes about a minute to run this function for the first time. It takes only a second if it's running on a version of PyTorch built from source. To reproduce run the code in [this gist](https://gist.github.com/Linux-cpp-lisp/9ffccb39f5f0f192a0c07eb6645d32d3). All credit to finding this to @Linux-cpp-lisp. I suspect this is an environment issue, i.e. an old version of that we ship as a nightly vs. a newer version I'm using locally. Clearly this prevents the optimization from being useful. ### Versions The nightly in question here is 1.12.0.dev20220415-py3.9_cuda11.3_cudnn8.3.2_0 My local environment is ``` Collecting environment information... PyTorch version: 1.12.0a0+git075974e Is debug build: False CUDA used to build PyTorch: 11.1 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 10.0.1 CMake version: version 3.22.3 Libc version: glibc-2.27 Python version: 3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.4.0-1051-aws-x86_64-with-glibc2.27 Is CUDA available: True CUDA runtime version: 11.1.105 GPU models and configuration: GPU 0: A100-SXM4-40GB Nvidia driver version: 450.119.03 cuDNN version: Probably one of the following: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5 /usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7.6.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.0/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.0.5 /usr/local/cuda-11.1/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.0.5 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy==0.812 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.20.3 [pip3] torch==1.12.0a0+git075974e [pip3] torch2trt==0.3.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-include 2021.2.0 h06a4308_296 [conda] mkl-random 1.2.1 pypi_0 pypi [conda] mkl-service 2.3.0 pypi_0 pypi [conda] mkl_fft 1.3.0 py39h42c9631_2 [conda] mkl_random 1.2.2 py39h51133e4_0 [conda] mypy 0.812 pyhd8ed1ab_0 conda-forge [conda] mypy_extensions 0.4.3 py39h06a4308_0 [conda] numpy 1.20.3 pypi_0 pypi [conda] numpy-base 1.20.2 py39hfae3a4d_0 [conda] torch 1.11.0 pypi_0 pypi ``` One notable difference is CUDA 11.1 locally vs 11.3 in the nightlies (note that the gist doesn't use CUDA).
17
5,966
75,895
add -D_GLIBCXX_ASSERTIONS in debug mode
module: build, triaged
### 🚀 The feature, motivation and pitch Internally, dev builds are built with -D_GLIBCXX_ASSERTIONS, which enables checks like index out-of-bounds in vectors. See https://github.com/pytorch/pytorch/pull/75766 - which was built with _GLIBCXX_ASSERTIONS and found one error. cc @malfet @seemethere @janeyx99 ### Alternatives _No response_ ### Additional context _No response_
0
5,967
75,864
INTERNAL ASSERT FAILED at "vulkan_rewrite.cpp":272
oncall: mobile
### 🐛 Describe the bug ``` from torch import nn import torch from torch.utils.mobile_optimizer import optimize_for_mobile model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=1)) model = model.cpu() model.eval() example0 = torch.rand(1, 3, 4, 4) with torch.no_grad(): traced = torch.jit.trace(model, example0) print('torch version is', torch.__version__) optimized_traced = optimize_for_mobile(traced, backend='vulkan') optimized_traced._save_for_lite_interpreter("./traced_model_vulkan.ptl") ``` Message ``` torch version is 1.11.0 Traceback (most recent call last): File "/home/marat/OCR/yolo3/exmobile.py", line 15, in <module> optimized_traced = optimize_for_mobile(traced, backend='vulkan') File "/home/marat/anaconda3/envs/cexp/lib/python3.7/site-packages/torch/utils/mobile_optimizer.py", line 67, in optimize_for_mobile optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(script_module._c, preserved_methods_str) RuntimeError: falseINTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646755861072/work/torch/csrc/jit/passes/vulkan_rewrite.cpp":272, please report a bug to PyTorch. Mobile optimizaiton only available with Vulkan at the moment. Vulkan is not enabled. Please build with USE_VULKAN=1 ``` ### Versions torch 1.11.0 ubuntu 18.04 installed by conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
0
5,968
75,862
LayerNorm and GroupNorm with num_groups=1 not equivalent
module: nn, triaged, module: norms and normalization
### 🐛 Describe the bug `LayerNorm` and `GroupNorm` with `num_groups=1` should be equivalent but they are not ```python from torch import nn import torch import random random.seed(0) torch.manual_seed(0) x = torch.randn((1, 8, 2, 2)) x_norm = x.permute(0, 2, 3, 1) x_norm = nn.LayerNorm(8, eps=1e-6)(x_norm) x_norm.permute(0, 3, 1, 2) print(x_norm[0,:2,:2,:2]) ``` ``` tensor([[[-1.4655, 0.7272], [-0.6575, 0.8390]], [[-0.8825, -1.0009], [-0.3389, -1.8741]]], grad_fn=<SliceBackward0>) ``` ```python x_norm = nn.GroupNorm(1, 8, eps=1e-6)(x) print(x_norm[0,:2,:2,:2]) ``` ``` tensor([[[-1.1255, -1.1518], [-0.2556, -0.4378]], [[ 0.8369, 0.6812], [-0.3206, -2.1088]]], grad_fn=<SliceBackward0>) ```` ### Versions ``` Collecting environment information... PyTorch version: 1.11.0+cu102 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1080 Ti Nvidia driver version: 510.54 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.22.3 [pip3] pytorch-lightning==1.5.10 [pip3] pytorch-pretrained-biggan==0.1.1 [pip3] torch==1.11.0 [pip3] torchaudio==0.11.0 [pip3] torchcam==0.3.1 [pip3] torchinfo==1.6.5 [pip3] torchmetrics==0.7.3 [pip3] torchvision==0.12.0 [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py39h7f8727e_0 [conda] mkl_fft 1.3.1 py39hd3c417c_0 [conda] mkl_random 1.2.2 py39h51133e4_0 [conda] mypy-extensions 0.4.3 pypi_0 pypi [conda] numpy 1.22.3 pypi_0 pypi [conda] pytorch-lightning 1.5.10 pypi_0 pypi [conda] pytorch-mutex 1.0 cuda pytorch [conda] pytorch-pretrained-biggan 0.1.1 pypi_0 pypi [conda] torch 1.11.0 pypi_0 pypi [conda] torchaudio 0.11.0 pypi_0 pypi [conda] torchcam 0.3.1 pypi_0 pypi [conda] torchinfo 1.6.5 pypi_0 pypi [conda] torchmetrics 0.7.3 pypi_0 pypi [conda] torchvision 0.12.0 pypi_0 pypi ``` cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
6
5,969
75,798
Fix workaround `__module__` used to appease public binding checks
triaged
This issue number is used to easily track quick fixes for `__module__` for submodules that need a broader fix. You should reference it via `TODO: Fix via https://github.com/pytorch/pytorch/issues/75798` next to the workaround to make sure the workarounds remain easily searchable.
0
5,970
75,794
Different result with JIT
oncall: jit
### 🐛 Describe the bug ```python import torch # Outputs 1,1,1 if the following line is commented-out, and 1,1,2 if it is not #@torch.jit.script def test(): a = torch.zeros(1) b = torch.zeros(1) c = torch.zeros(1) n = 3 x = torch.ones(n) for i in range(n): a, b, c = c, a, b a[0] += x[i] print(a[0].item()) test() ``` ### Versions I ran with PyTorch 1.11.0 on a CPU. ``` PyTorch version: 1.11.0+cpu Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.4 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 Libc version: glibc-2.10 Python version: 3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0] (64-bit runtime) Python platform: Linux-5.13.0-39-generic-x86_64-with-debian-bullseye-sid Is CUDA available: False CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] botorch==0.1.0 [pip3] gpytorch==0.3.2 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.20.3 [pip3] numpydoc==0.9.2 [pip3] pytorch-lightning==0.9.0rc6 [pip3] torch==1.11.0+cpu [pip3] torchaudio==0.11.0+cpu [pip3] torchvision==0.12.0+cpu [conda] blas 1.0 mkl [conda] botorch 0.1.0 pypi_0 pypi [conda] cpuonly 2.0 0 pytorch [conda] gpytorch 0.3.2 pypi_0 pypi [conda] mkl 2020.1 217 [conda] mkl-include 2019.3 199 anaconda [conda] mkl-service 2.3.0 py37he904b0f_0 [conda] mkl_fft 1.0.15 py37ha843d7b_0 [conda] mkl_random 1.1.1 py37h0573a6f_0 [conda] mypy-extensions 0.4.3 pypi_0 pypi [conda] numpy 1.20.3 pypi_0 pypiz [conda] numpy-base 1.18.1 py37hde5b4d6_1 [conda] numpydoc 0.9.2 py_0 [conda] pytorch-lightning 0.9.0rc6 pypi_0 pypi [conda] pytorch-mutex 1.0 cpu pytorch [conda] torch 1.10.0+cpu pypi_0 pypi [conda] torchaudio 0.11.0+cpu pypi_0 pypi [conda] torchvision 0.12.0+cpu pypi_0 pypi ```
2
5,971
75,788
`torch.jit.script` Script functions do return `requires_grad = False` if `torch.no_grad()` has been used
oncall: jit
### 🐛 Describe the bug Hello, I think I found a bug with `torch.jit.script` and `torch.no_grad()`. Here is a code sample to reproduce behavior: ``` import torch def bar(x): with torch.no_grad(): y = x * 2 z = 2 * x + y print("requires_grad: ", x.requires_grad, y.requires_grad, z.requires_grad) return z scripted_bar = torch.jit.script(bar) a = torch.rand(3, requires_grad=True) b = torch.rand(3, requires_grad=True) print("scripted_bar(a) : ", scripted_bar, scripted_bar(a).requires_grad) print("bar(b) : ", bar, bar(b).requires_grad) ``` Output: ``` requires_grad: True False False scripted_bar(a) : <torch.jit.ScriptFunction object at 0x7fd538ac63b0> False requires_grad: True False True bar(b) : <function bar at 0x7fd5ada901f0> True ``` I would expect the scripted and non-scripted function to both return tensors with `requires_grad = True`. ### Versions Collecting environment information... PyTorch version: 1.11.0+cu102 Is debug build: False CUDA used to build PyTorch: 10.2 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: Quadro T1000 Nvidia driver version: 510.54 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.22.3 [pip3] torch==1.11.0 [pip3] torchvision==0.12.0 [conda] mypy-extensions 0.4.3 pypi_0 pypi [conda] numpy 1.22.3 pypi_0 pypi [conda] torch 1.11.0 pypi_0 pypi [conda] torchvision 0.12.0 pypi_0 pypi
1
5,972
75,785
Expected quantizer->qscheme() == kPerTensorAffine to be true, but got false.
oncall: quantization, low priority, triaged
### 🐛 Describe the bug When finished quantized aware training, I got a quantized model after torch.quantization.convert(). But when I convert pytorch model, I happened to meet the problem: Expected quantizer->qscheme() == kPerTensorAffine to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.) this error happened in the function _optimize_graph() of torch.onnx.export() API when passed torch._C._jit_pass_onnx_unpack_quantized_weights() function. It seems that the pytorch only support quantizer->qscheme() == kPerTensorAffine. But in my model, I used kPerChannelAffine to quantized the weights. Could you give me some advice to fix this problem? here is the source code: class qatmodel(torch.nn.Module): def __init__(self): super().__init__() self.quant = torch.quantization.QuantStub() self.dequant = torch.quantization.DeQuantStub() self.conv1 = torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=True) self.bn1 = torch.nn.BatchNorm2d(64) self.relu1 = torch.nn.ReLU(inplace=True) def forward(self, x): x = self.quant(x) x = self.conv1(x) x = self.bn1(x) x = self.relu1(x) x = self.dequant(x) return x my_model = qatmodel() torch.quantization.fuse_modules(my_model, ["conv1", "bn1", "relu1"], inplace=True) BACKEND = "fbgemm" my_model.train() my_model.qconfig = torch.quantization.get_default_qat_qconfig(BACKEND ) my_model = torch.quantization.prepare_qat(my_model) my_model.eval() torch.backends.quantized.engine = BACKEND model_int8 = torch.quantization.convert(my_model) fp32_input = torch.randn(4, 3, 3, 3) dynamic_axes = { "data": { 0: '?', 1: '?', 2: '?', 3: '?' }, } model_scripted = torch.jit.trace(model_int8, fp32_input) torch.onnx.export( model_scripted, fp32_input, "verbose.onnx", export_params=True, verbose=True, input_names=["data"], output_names=["output"], opset_version=13, do_constant_folding=True, keep_initializers_as_inputs=True) ### Versions PyTorch version: 1.12.0.dev20220228+cu111 Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 11.0.0 (https://github.com/llvm/llvm-project.git 176249bd6732a8044d457092ed932768724a6f06) CMake version: version 3.18.4 Libc version: glibc-2.15 Python version: 2.7.17 (default, Sep 30 2020, 13:38:04) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.4.0-1055-azure-x86_64-with-Ubuntu-18.04-bionic Is CUDA available: N/A CUDA runtime version: 11.1.105 GPU models and configuration: GPU 0: Tesla K80 Nvidia driver version: 440.64.00 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel
2
5,973
75,778
`torch.matmul` produces wrong results on A4000 for matrices (n*m) with large m and small n
high priority, triaged, module: tf32
### 🐛 Describe the bug ### Description `torch.matmul` produces wrong result for matrices (n*m) with large m and small n, but not if you transpose it. This bug is only seen on my A4000 devices but not on 2070 devices. ### Code to reproduce ``` import torch as th import numpy as np num_trials = 100 num_failures_cpu = 0 num_failures_gpu = 0 num_failures_cpu_transposed = 0 num_failures_gpu_transposed = 0 stTime = time.time() for _ in range(num_trials): M = th.rand(4, 4) * 100 X = (th.rand(1000000, 4) - 0.5) * 100 res1 = X @ M res2 = X.cuda() @ M.cuda() res3 = X.numpy() @ M.numpy() res1_T = M @ X.T res2_T = M.cuda() @ X.T.cuda() res3_T = M.numpy() @ X.T.numpy() try: np.testing.assert_allclose(res1.cpu().numpy(), res3, atol=1e-1) except: num_failures_cpu += 1 try: np.testing.assert_allclose(res2.cpu().numpy(), res3, atol=1e-1) except: num_failures_gpu += 1 try: np.testing.assert_allclose(res1_T.cpu().numpy(), res3_T, atol=1e-1) except: num_failures_cpu_transposed += 1 try: np.testing.assert_allclose(res2_T.cpu().numpy(), res3_T, atol=1e-1) except: num_failures_gpu_transposed += 1 print(f""" CPU failures: {num_failures_cpu / num_trials} GPU failures: {num_failures_gpu / num_trials} CPU failures (transposed): {num_failures_cpu_transposed / num_trials} GPU failures (transposed): {num_failures_gpu_transposed / num_trials} Total time: {time.time() - stTime}s. """) ``` ### Results On GTX 2070 ``` CPU failures: 0.0 GPU failures: 0.0 CPU failures (transposed): 0.0 GPU failures (transposed): 0.0 ``` On A4000 ``` CPU failures: 0.0 GPU failures: 0.0 CPU failures (transposed): 0.0 GPU failures (transposed): 1.0 ```` ### Versions pytorch 1.10.2 + cuda 11.3 cc @ezyang @gchanan @zou3519 @zasdfgbnm @ptrblck @ngimel
5
5,974
75,773
Handle noncontiguous inputs in distributed backend layer
oncall: distributed, triaged, module: c10d
### 🚀 The feature, motivation and pitch As inspired by issue "torch.distributed.nn.functional.all_gather: Tensors must be contiguous" #73515 I had quick fix to force the tensors to be contiguous at the distributed.nn.functional layer, see https://github.com/pytorch/pytorch/pull/75276 However, I think the proper fix should be implemented in the layer of distributed backend, e.g. torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp, to support tensors that fails "is_non_overlapping_and_dense()" check. Such materialization function can be shared by all NCCL collectives, that requires dense non-overlapping buffer. I think there are 2 reasons that it's preferred to be done at the lower layer: 1. tensor.is_non_overlapping_and_dense() is not exposed in python API, and only the tensor that failed this check should be materialized. Calling the .contiguous() universally at the functional layer is an overkill. 2. All the layers that are built on top of distributed backend layer will not need to worry about noncontiguous inputs. ### Alternatives Do it in the distributed.nn.functional layer, as https://github.com/pytorch/pytorch/pull/75276 ### Additional context _No response_ cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
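For illustration, the functional-layer workaround described as an overkill looks roughly like this (a sketch, not the code in the linked PR).
```python
import torch

def ensure_dense(t: torch.Tensor) -> torch.Tensor:
    # Unconditionally materializing with .contiguous() is the "overkill" fix;
    # ideally only tensors failing is_non_overlapping_and_dense() (not exposed
    # in Python) would be copied, and only inside the backend layer.
    return t if t.is_contiguous() else t.contiguous()
```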
1
5,975
75,750
[Autograd] Queued Callback Does Not Propagate Error
module: autograd, triaged, actionable
When queueing a callback using `torch.autograd.Variable._execution_engine.queue_callback()` for a GPU tensor, a raised error in the callback is not propagated. Instead, we see a message like `SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7fa8f8172850> returned NULL without setting an error`. <details> <summary>Reproducer</summary> ```python import torch def callback(): print("Callback! Raising an error...") raise RuntimeError("Error from callback!") def hook_with_callback(*args): print("Backward hook!") torch.autograd.Variable._execution_engine.queue_callback(callback) t = torch.tensor([1., 2.], requires_grad=True, device=torch.device("cuda")) t.register_hook(hook_with_callback) output = t ** 2 loss = output.sum() loss.backward() ``` </details> <details> <summary>Output</summary> ```shell Backward hook! Callback! Raising an error... Traceback (most recent call last): File "repro.py", line 15, in <module> loss.backward() File "/fsx/users/andgu/work/pytorch/torch/_tensor.py", line 396, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/fsx/users/andgu/work/pytorch/torch/autograd/__init__.py", line 173, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7fa8f8172850> returned NULL without setting an error ``` </details> If the tensor `t` is on CPU instead, then the error is correctly propagated. </details> <details> <summary>Output when tensor is instead on CPU</summary> ```shell Backward hook! Callback! Raising an error... Traceback (most recent call last): File "repro.py", line 15, in <module> loss.backward() File "/fsx/users/andgu/work/pytorch/torch/_tensor.py", line 396, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/fsx/users/andgu/work/pytorch/torch/autograd/__init__.py", line 173, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "repro.py", line 5, in callback raise RuntimeError("Error from callback!") RuntimeError: Error from callback! ``` </details> I am not sure if there are constraints regarding the tensor being on GPU that make this difficult. Both PyTorch FSDP and Fairscale FSDP have a `p_assert()` method (https://github.com/pytorch/pytorch/blob/d4cce30573651b5683d7e932b88194425db23758/torch/distributed/fsdp/fully_sharded_data_parallel.py#L2872 and [here in Fairscale](https://github.com/facebookresearch/fairscale/blob/72f373c1ac00e6a0506fdecbeed5bb1857e98ce7/fairscale/nn/data_parallel/fully_sharded_data_parallel.py#L2422)) to ensure the error message is printed before erroring. However, this has previously been used for internal assertions only. The downside of using `p_assert()` (or something similar that prints before erroring) is that the cryptic message `SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7fa8f8172850> returned NULL without setting an error` is still written to console, which may confuse users. cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
3
5,976
75,747
Depthwise Conv1d performance (a naive CUDA kernel is 10x faster)
module: cuda, triaged
### 🚀 The feature, motivation and pitch Please improve the CUDA performance of depthwise Conv1d :) FYI, I wrote a naive CUDA kernel and it's already 10x faster than PyTorch: https://github.com/BlinkDL/RWKV-CUDA On an RTX 3090: PyTorch = fwd 14ms, bwd 65ms; CUDA kernel v3 = fwd 0.8ms, bwd 5.5ms. ### Alternatives _No response_ ### Additional context _No response_ cc @ngimel
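For reference, a depthwise Conv1d in PyTorch is expressed by setting `groups` equal to the channel count; a minimal sketch of the kind of configuration being benchmarked (the sizes are assumptions).
```python
import torch

B, C, T, K = 32, 768, 1024, 3
x = torch.randn(B, C, T, device="cuda")
# groups=C gives each channel its own kernel, i.e. a depthwise convolution.
conv = torch.nn.Conv1d(C, C, kernel_size=K, groups=C, padding=K // 2).cuda()

y = conv(x)
y.sum().backward()          # exercise the backward path as well
torch.cuda.synchronize()
```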
4
5,977
75,740
Large numerical error when applying nn.Linear in RTX A6000 with cuda>=11.1
high priority, module: cuda, triaged, module: tf32
### 🐛 Describe the bug # Applying a simple Linear layer (Y = aX) is generating incorrect output with A6000 GPUs using cuda >=11.1 ## Report: 1.In the code snippet, we have a simple linear layer (without bias). First we use nn.Linear to compute Y=aX. Next we explicitly multiply the two tensors, We expect the norm of the difference to be 0 in all cases (can be confirmed by uncommenting the line setting device to "cpu"). However a significant error is seen with A6000 GPUs using cuda >= 11.1 (Please see output table). 2. No error observed for CPU, and other GPUs at our disposal (NVIDIA RTX, TITAN X ) 3. With A6000, **no error for cuda 11.0**. 4. With A6000, **error confirmed for cuda 11.1 and cuda 11.3**. 5. This error is deadly serious since it is an insidious corruption, without any obvious outward symptoms. In fact, if we uncomment the other line of code, it can be seen that a significant error is introduced even when multiplying with 1. ## Code Snippet: ```python import torch d = "cuda:0" #d = "cpu" x = [pow(10,x) for x in list(range(10))] for size in x: data = torch.rand(size,1) lin = torch.nn.Linear(1,1,bias=False).to(d) #lin.weight.data = torch.tensor([[1.0]]) print (size,torch.norm(data.to(d)*lin.weight.to(d) - lin.to(d)(data.to(d))).item()) ``` ## Output | size | Error Norm | | ------------------ | ---------------------------------------| | 1 | 5.2422285079956055e-05 | | 10 | 0.0003333788481540978 | | 100 | 0.0008036142098717391 | | 1000 | 2.7209020117879845e-05 | | 10000 | 0.0 | | 100000 | 0.005546107422560453 | | 1000000 | 0.13998229801654816 | | 10000000 | 0.05942477285861969 | | 100000000 | 1.474966287612915 | | 1000000000 | 2.536698341369629 | ### Versions Collecting environment information... PyTorch version: 1.10.1+cu113 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0] (64-bit runtime) Python platform: Linux-5.4.0-94-generic-x86_64-with-glibc2.29 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA RTX A6000 GPU 1: NVIDIA RTX A6000 GPU 2: NVIDIA RTX A6000 GPU 3: NVIDIA RTX A6000 Nvidia driver version: 470.86 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.22.0 [pip3] torch==1.10.1+cu113 [pip3] torch-cluster==1.5.9 [pip3] torch-geometric==2.0.3 [pip3] torch-scatter==2.0.9 [pip3] torch-sparse==0.6.12 [pip3] torch-spline-conv==1.2.1 [pip3] torchaudio==0.10.1+cu113 [pip3] torchvision==0.11.2+cu113 [conda] blas 1.0 mkl [conda] mkl 2021.2.0 h06a4308_296 [conda] mkl-service 2.3.0 py38h27cfd23_1 [conda] mkl_fft 1.3.0 py38h42c9631_2 [conda] mkl_random 1.2.1 py38ha9443f7_2 [conda] mypy_extensions 0.4.3 py38_0 [conda] numpy 1.20.1 py38h93e21f0_0 [conda] numpy-base 1.20.1 py38h7d8b39e_0 [conda] numpydoc 1.1.0 pyhd3eb1b0_1 cc @ezyang @gchanan @zou3519 @ngimel @zasdfgbnm @ptrblck
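The issue is labeled `module: tf32`; as a quick check, TF32 matmuls can be turned off to see whether the discrepancy disappears (a sketch; the toggles exist in the listed PyTorch versions).
```python
import torch

torch.backends.cuda.matmul.allow_tf32 = False  # force full-precision matmuls
torch.backends.cudnn.allow_tf32 = False

x = torch.rand(1_000_000, 1, device="cuda")
lin = torch.nn.Linear(1, 1, bias=False).cuda()
err = torch.norm(x * lin.weight - lin(x)).item()
print(err)  # expected to be ~0 with TF32 disabled
```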
3
5,978
75,737
torch.device missing doctring
module: docs, triaged
There is no docstring for `torch.device` for v. 1.12.0.dev20220224. ```python # Running on JupyterLab import torch torch.device(<shift + tab>) # A pop-up shows the following Init signature: torch.device(self, /, *args, **kwargs) Docstring: <no docstring> File: /usr/local/Caskroom/miniconda/base/envs/book/lib/python3.10/site-packages/torch/__init__.py Type: type Subclasses: ``` <details> <summary>Versions</summary> ``` PyTorch version: 1.12.0.dev20220224 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 12.3.1 (x86_64) GCC version: Could not collect Clang version: 11.0.0 (clang-1100.0.33.12) CMake version: version 3.23.0 Libc version: N/A Python version: 3.10.0 (default, Nov 10 2021, 11:24:47) [Clang 12.0.0 ] (64-bit runtime) Python platform: macOS-10.16-x86_64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.2 [pip3] torch==1.12.0.dev20220224 [conda] blas 1.0 mkl [conda] mkl 2021.4.0 hecd8cb5_637 [conda] mkl-service 2.4.0 py310hca72f7f_0 [conda] mkl_fft 1.3.1 py310hf879493_0 [conda] mkl_random 1.2.2 py310hc081a56_0 [conda] numpy 1.21.2 py310h2cbf25c_0 [conda] numpy-base 1.21.2 py310he490955_0 [conda] pytorch 1.12.0.dev20220224 py3.10_0 pytorch-nightly ``` </details> cc @brianjo @mruberry
0
5,979
75,733
`torch.sum, prod, cumsum, cumprod, sparse.sum` INTERNAL ASSERT FAIL
module: error checking, triaged, module: reductions
### 🐛 Describe the bug ```python import torch a = torch.rand([], requires_grad=True) torch.sum(a, dtype=torch.bool) ``` When the tensor requires grad and the `dtype` arg is `torch.bool`, `torch.sum` triggers an internal assert failure. `torch.prod`, `cumsum`, `cumprod` and `sparse.sum` also trigger the same internal assert failure: ```python import torch a = torch.rand([], requires_grad=True) torch.prod(a, dtype=torch.bool) ``` ```python import torch a = torch.rand([1], requires_grad=True) torch.cumsum(a, 0, dtype=torch.int8) ``` ```python import torch a = torch.rand([1], requires_grad=True) torch.cumprod(a, 0, dtype=torch.int8) ``` ```python import torch a = torch.rand([1], requires_grad=True).to_sparse() torch.sparse.sum(a, dtype=torch.int8) ``` ### Versions 1.11.0
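Since gradients cannot flow through a boolean result anyway, one workaround sketch while the assert is being fixed is to reduce in the input dtype and cast the result afterwards:

```python
import torch

a = torch.rand([], requires_grad=True)
# reduce in the floating dtype, then cast the (non-differentiable) result to bool
out = torch.sum(a).bool()
print(out)
```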
1
5,980
75,725
Warning originating in C10 backend does not get translated to Python warning if run from subprocess
high priority, triage review, oncall: distributed
### 🐛 Describe the bug Hi, I want to record a warning in Python, that is originating in C10 portion of the code (`TORCH_WARN_ONCE`), while running in a subprocess because of DDP. However, it seems that this warning is impossible to catch because it does not propagate to Python correctly. Below is a simple demo, that is mostly taken from [this tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) and adapted to catching warnings. <details> <summary>Code and output with warnings</summary> ```python import contextlib import io import os import sys import warnings import torch import torch.distributed as dist import torch.multiprocessing as mp from torch.nn.parallel import DistributedDataParallel as DDP import traceback def setup(rank, world_size): os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "12355" # initialize the process group dist.init_process_group("gloo", rank=rank, world_size=world_size) def cleanup(): dist.destroy_process_group() class ToyModel(torch.nn.Module): def __init__(self): super(ToyModel, self).__init__() self.net1 = torch.nn.Linear(10, 10) self.relu = torch.nn.ReLU() self.net2 = torch.nn.Linear(10, 5) def forward(self, x): return self.net2(self.relu(self.net1(x))) def demo_basic(rank, world_size): new_stdout = io.StringIO() new_stderr = io.StringIO() with contextlib.ExitStack() as stack: warns = stack.enter_context(warnings.catch_warnings(record=True)) stack.enter_context(contextlib.redirect_stdout(new_stdout)) stack.enter_context(contextlib.redirect_stderr(new_stderr)) warnings.simplefilter("always") warnings.warn("Simple warning", Warning) print(f"Running basic DDP example on rank {rank}.") setup(rank, world_size) # create model and move it to GPU with id rank model = ToyModel().to(rank) ddp_model = DDP(model, device_ids=[rank], find_unused_parameters=True) loss_fn = torch.nn.MSELoss() optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.001) optimizer.zero_grad() try: outputs = ddp_model(torch.randn(20, 10)) labels = torch.randn(20, 5).to(rank) loss_fn(outputs, labels).backward() optimizer.step() except: print(traceback.format_exc(), file=sys.stderr) finally: cleanup() print(f"Caught warnings:") for warn in warns: print(warn) print(f"Caught stdout: {new_stdout.getvalue()}") print(f"Caught stderr: {new_stderr.getvalue()}") def run_demo(demo_fn, world_size): mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True) if __name__ == "__main__": n_gpus = torch.cuda.device_count() world_size = n_gpus run_demo(demo_basic, world_size) ``` Output: ``` [W reducer.cpp:1289] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator()) Caught warnings: {message : Warning('Simple warning'), category : 'Warning', filename : '/home/otaj/files/grid/simple-demo/main.py', lineno : 46, line : None} Caught stdout: Running basic DDP example on rank 0. 
Caught stderr: ``` </details> However, if I do some intentional mistake in order to raise an Exception in the similar code path (such as changing the size of tensors so that they do not match anymore), the Exception is correctly propagated to to Python as a `RuntimeError`, see the modified code <details> <summary>Code and output with Exception</summary> ```python import contextlib import io import os import sys import warnings import torch import torch.distributed as dist import torch.multiprocessing as mp from torch.nn.parallel import DistributedDataParallel as DDP import traceback def setup(rank, world_size): os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = "12355" # initialize the process group dist.init_process_group("gloo", rank=rank, world_size=world_size) def cleanup(): dist.destroy_process_group() class ToyModel(torch.nn.Module): def __init__(self): super(ToyModel, self).__init__() self.net1 = torch.nn.Linear(10, 10) self.relu = torch.nn.ReLU() self.net2 = torch.nn.Linear(10, 5) def forward(self, x): return self.net2(self.relu(self.net1(x))) def demo_basic(rank, world_size): new_stdout = io.StringIO() new_stderr = io.StringIO() with contextlib.ExitStack() as stack: warns = stack.enter_context(warnings.catch_warnings(record=True)) stack.enter_context(contextlib.redirect_stdout(new_stdout)) stack.enter_context(contextlib.redirect_stderr(new_stderr)) warnings.simplefilter("always") warnings.warn("Simple warning", Warning) print(f"Running basic DDP example on rank {rank}.") setup(rank, world_size) # create model and move it to GPU with id rank model = ToyModel().to(rank) ddp_model = DDP(model, device_ids=[rank], find_unused_parameters=True) loss_fn = torch.nn.MSELoss() optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.001) optimizer.zero_grad() try: outputs = ddp_model(torch.randn(20, 9)) # <--- Change is here, this will create error labels = torch.randn(20, 5).to(rank) loss_fn(outputs, labels).backward() optimizer.step() except: print(traceback.format_exc(), file=sys.stderr) finally: cleanup() print(f"Caught warnings:") for warn in warns: print(warn) print(f"Caught stdout: {new_stdout.getvalue()}") print(f"Caught stderr: {new_stderr.getvalue()}") def run_demo(demo_fn, world_size): mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True) if __name__ == "__main__": n_gpus = torch.cuda.device_count() world_size = n_gpus run_demo(demo_basic, world_size) ``` Output: ``` Caught warnings: {message : Warning('Simple warning'), category : 'Warning', filename : '/home/otaj/files/grid/simple-demo/main.py', lineno : 46, line : None} Caught stdout: Running basic DDP example on rank 0. 
Caught stderr: Traceback (most recent call last): File "/home/otaj/files/grid/simple-demo/main.py", line 60, in demo_basic outputs = ddp_model(torch.randn(20, 9)) File "/home/otaj/.pyenv/versions/pl-dev/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/otaj/.pyenv/versions/pl-dev/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 963, in forward output = self.module(*inputs[0], **kwargs[0]) File "/home/otaj/.pyenv/versions/pl-dev/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/otaj/files/grid/simple-demo/main.py", line 34, in forward return self.net2(self.relu(self.net1(x))) File "/home/otaj/.pyenv/versions/pl-dev/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/home/otaj/.pyenv/versions/pl-dev/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 103, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (20x9 and 10x10) ``` </details> The issue was first reported on [PyTorch slack](https://pytorch.slack.com/archives/C3PDTEV8E/p1649776584857029), cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @albanD , @ezyang , @mruberry , it is most likely linked to this issue: #72948 Thanks a lot! ### Versions Collecting environment information... PyTorch version: 1.11.0+cu113 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Arch Linux (x86_64) GCC version: (GCC) 11.2.0 Clang version: Could not collect CMake version: version 3.23.0 Libc version: glibc-2.35 Python version: 3.9.11 (main, Apr 7 2022, 15:33:34) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.17.1-zen1-1-zen-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: 11.6.112 GPU models and configuration: GPU 0: NVIDIA T1200 Laptop GPU Nvidia driver version: 510.60.02 cuDNN version: Probably one of the following: /usr/lib/libcudnn.so.8.3.3 /usr/lib/libcudnn_adv_infer.so.8.3.3 /usr/lib/libcudnn_adv_train.so.8.3.3 /usr/lib/libcudnn_cnn_infer.so.8.3.3 /usr/lib/libcudnn_cnn_train.so.8.3.3 /usr/lib/libcudnn_ops_infer.so.8.3.3 /usr/lib/libcudnn_ops_train.so.8.3.3 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] mypy==0.942 [pip3] mypy-extensions==0.4.3 [pip3] numpy==1.22.3 [pip3] pytorch-lightning==1.7.0.dev0 [pip3] torch==1.11.0+cu113 [pip3] torchmetrics==0.7.3 [pip3] torchtext==0.12.0 [pip3] torchvision==0.12.0+cu113 [conda] Could not collect
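Until the C10 warning is translated into a Python warning, it is written directly to the process's stderr file descriptor, so neither `warnings.catch_warnings` nor `contextlib.redirect_stderr` (which only swaps the Python-level `sys.stderr` object) can see it. One way to at least capture the text is to redirect the file descriptor itself; a sketch, not a fix for the underlying translation issue:

```python
import contextlib
import os
import sys
import tempfile

@contextlib.contextmanager
def capture_fd_stderr(sink):
    """Capture everything written to file descriptor 2, including C++ output."""
    sys.stderr.flush()
    saved_fd = os.dup(2)
    tmp = tempfile.TemporaryFile(mode="w+b")
    os.dup2(tmp.fileno(), 2)
    try:
        yield
    finally:
        sys.stderr.flush()
        os.dup2(saved_fd, 2)
        os.close(saved_fd)
        tmp.seek(0)
        sink.append(tmp.read().decode(errors="replace"))
        tmp.close()

# usage inside demo_basic, around the forward/backward calls:
# captured = []
# with capture_fd_stderr(captured):
#     outputs = ddp_model(torch.randn(20, 10))
# print(captured[0])  # contains the "[W reducer.cpp:...]" text, if any was emitted
```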
4
5,981
75,721
Support batch indexing with sparse tensors with torch.sparse
module: sparse, triaged
### 🚀 The feature, motivation and pitch I am working on a problem that requires looking up sparse tensor values based on a batch of indices. The problem can be abstracted by the following example: ``` import torch a = torch.sparse_coo_tensor(indices=[[0, 2, 3, 6]], values=[0, 1, 2, 3], size=(10000000,)) index = [1, 2, 3] print(a[index]) ``` If `a` is a dense tensor (vector), then given a list of indices, `a[index]` will return a tensor with the i-th element as `a[index[i]]`. However, on sparse tensors, this operator is not supported, and will give the following error: ``` NotImplementedError: Could not run 'aten::index.Tensor' with arguments from the 'SparseCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::index.Tensor' is only available for these backends: [CPU, CUDA, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode]. CPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel] CUDA: registered at aten/src/ATen/RegisterCUDA.cpp:26496 [kernel] QuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel] BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback] Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback] Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback] Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback] Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback] ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback] AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel] Tracer: registered at ../torch/csrc/autograd/generated/TraceType_1.cpp:10664 [kernel] UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at 
../aten/src/ATen/autocast_mode.cpp:466 [backend fallback] Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback] Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback] VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback] ``` To bypass this, I have to convert `a` to a dense tensor as ``` print(a.to_dense()[index]) ``` However, this is not memory-friendly, especially if `a` is very sparse and its full size is huge. So I wonder whether there is a way to batch-index sparse tensors the same way as dense tensors, or whether it would be possible to support this in the near future. Thanks! ### Alternatives _No response_ ### Additional context _No response_ cc @nikitaved @pearu @cpuhrsch
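As a stop-gap, the lookup can be done without densifying by searching the coalesced (hence sorted and unique) stored indices directly. A sketch for the 1-D case — the helper name is mine, and it assumes the sparse tensor has at least one stored element:

```python
import torch

def sparse_lookup_1d(sp, query):
    """Return sp[query] for a 1-D sparse COO tensor without calling to_dense().
    Positions that are not stored return 0 (the implicit value)."""
    sp = sp.coalesce()                       # sorted, de-duplicated indices
    stored_idx = sp.indices()[0]             # shape (nnz,)
    stored_val = sp.values()                 # shape (nnz,)
    query = torch.as_tensor(query, dtype=torch.long)
    pos = torch.searchsorted(stored_idx, query)
    pos = pos.clamp(max=stored_idx.numel() - 1)
    hit = stored_idx[pos] == query           # True where the queried index is stored
    out = torch.zeros(query.shape, dtype=stored_val.dtype)
    out[hit] = stored_val[pos[hit]]
    return out

a = torch.sparse_coo_tensor(indices=[[0, 2, 3, 6]], values=[0, 1, 2, 3], size=(10000000,))
print(sparse_lookup_1d(a, [1, 2, 3]))        # tensor([0, 1, 2])
```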
6
5,982
75,703
Let's host NVIDIA dependencies in our own S3
module: ci, triaged
### 🚀 The feature, motivation and pitch To avoid SEVs where we are unable to retrieve a dependency hosted by NVIDIA (see https://github.com/pytorch/pytorch/issues/74967), we should host copies in S3 ourselves. ### Alternatives _No response_ ### Additional context This was not possible before but may be possible now. cc @seemethere @malfet @pytorch/pytorch-dev-infra
14
5,983
75,701
Einsum should have an `out=` parameter
triaged, module: linear algebra
### 🚀 The feature, motivation and pitch When using `einsum` in performance sensitive code (and not caring about gradients), it is not good that it allocates a new tensor for the result. The `numpy` version has an `out=` parameter, similar to `bmm` and many other pytorch functions, which allows specifying a piece of memory to store the result. It would be good to have this in pytorch as well. ### Alternatives Alternatively users can avoid einsum entirely, but there is not always a pytorch function with the same functionality. ### Additional context _No response_ cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano
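Until `einsum` itself grows an `out=` parameter, contractions that map onto an existing op with `out=` support can be rewritten explicitly, which is the workaround the "Alternatives" section alludes to. A sketch for a batched matrix product:

```python
import torch

a = torch.randn(8, 64, 32)
b = torch.randn(8, 32, 16)
out = torch.empty(8, 64, 16)   # preallocated result buffer, reused across calls

# equivalent to torch.einsum("bij,bjk->bik", a, b), but writes into `out`
torch.bmm(a, b, out=out)
```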
7
5,984
75,680
Addressing skips in OpInfo nn.functional.binary_cross_entropy_with_logits
module: nn, module: tests, triaged
### 🐛 Describe the bug While implementing the OpInfo for nn.functional.binary_cross_entropy_with_logits, a large number of skips were needed to pass the related tests. The PR creating this OpInfo can be found here: #75604 Skips: ```python skips=( # The Weight tensor requires_grad = False DecorateInfo( unittest.skip("Skipped!"), "TestCommon", "test_floating_inputs_are_differentiable", dtypes=(torch.float32,) ), # Adding OpInfo to existing operator DecorateInfo( unittest.skip("Skipped!"), "TestCompositeCompliance", "test_backward", dtypes=(torch.float32,) ), # Pos Weight is required to be positive DecorateInfo( unittest.skip("Skipped!"), "TestMathBits", "test_neg_view", dtypes=(torch.float64,) ), # Test Gradient failures CI DecorateInfo( unittest.skip("Skipped!"), "TestGradients", "test_neg_view", dtypes=(torch.float64,) ), DecorateInfo( unittest.skip("Skipped!"), 'TestJit', 'test_variant_consistency_jit', dtypes=(torch.float32,) ), DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', "test_fn_gradgrad", dtypes=(torch.float64,)), DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', "test_forward_mode_AD", dtypes=(torch.float64,)), DecorateInfo(unittest.skip("Skipped!"), 'TestGradients', "test_fn_fwgrad_bwgrad", dtypes=(torch.float64,)), ), ``` ### Versions Not version specific cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345
0
5,985
75,673
Tensorboard Issue with visualizing the connections of encoder-decoder network
oncall: visualization
We are trying to visualize the connections of the intermediate layers of a deep neural network using tensorboard. Our implementation is below: ``` model = build_model(cfg) path = 'output/model_0004999.pth' model.load_state_dict(torch.load(path)['model']) # get some random training images print(trainloader) dataiter = iter(trainloader) images = dataiter.next() # create grid of images img_grid = torchvision.utils.make_grid(images) writer=SummaryWriter('content/logsdir') # show images matplotlib_imshow(img_grid, one_channel=True) # write to tensorboard writer.add_image('mt_images', img_grid) writer.add_graph(model, images) writer.close() ``` However, we keep getting this error pointing to `jit` ``` Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions Error occurs, No graph saved Traceback (most recent call last): File "convert_model_tensorboard.py", line 96, in <module> writer.add_graph(model, [images]) File "/home/ubuntu/user/.virtualenv/lib/python3.6/site-packages/torch/utils/tensorboard/writer.py", line 723, in add_graph self._get_file_writer().add_graph(graph(model, input_to_model, verbose)) File "/home/ubuntu/user/.virtualenv/lib/python3.6/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 293, in graph raise e File "/home/ubuntu/user/.virtualenv/lib/python3.6/site-packages/torch/utils/tensorboard/_pytorch_graph.py", line 286, in graph trace = torch.jit.trace(model, args) File "/home/ubuntu/user/.virtualenv/lib/python3.6/site-packages/torch/jit/_trace.py", line 742, in trace _module_class, File "/home/ubuntu/user/.virtualenv/lib/python3.6/site-packages/torch/jit/_trace.py", line 940, in trace_module _force_outplace, RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions ``` ### Versions ``` Python 3.6.9 torch==1.8.0 torchsummary==1.5.1 torchvision==0.9.0 tensorboard==2.8.0 ```
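`add_graph` traces the model with `torch.jit.trace`, so the traced `forward` must return only tensors (or lists/tuples/dicts of tensors). If the detection model returns a richer structure, one workaround is to wrap it so the tracer sees a plain tuple of tensors; a sketch, where the flattening logic has to be adapted to the actual output type of `build_model`:

```python
import torch

class TraceWrapper(torch.nn.Module):
    """Wraps a model whose forward returns a custom structure so that
    torch.jit.trace (used internally by add_graph) only sees tensors."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        # flatten whatever the wrapped model returns into a tuple of tensors
        if isinstance(out, dict):
            return tuple(v for v in out.values() if isinstance(v, torch.Tensor))
        if isinstance(out, (list, tuple)):
            return tuple(o for o in out if isinstance(o, torch.Tensor))
        return out

# writer.add_graph(TraceWrapper(model), images)
```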
1
5,986
75,667
Implement histc for bfloat16 on CPU
triaged, module: bfloat16, module: sorting and selection
### 🚀 The feature, motivation and pitch Add support for `histc` using bfloat16 on CPU. Currently, this yields the error: ``` RuntimeError: "histogram_cpu" not implemented for 'BFloat16' ``` as detailed in [this issue at wandb](https://github.com/wandb/client/issues/3332). ### Alternatives _No response_ ### Additional context _No response_
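Until a bfloat16 kernel lands, a simple workaround (at the cost of an upcast copy) is to cast to float32 before binning:

```python
import torch

x = torch.randn(1000).to(torch.bfloat16)
# histc has no bfloat16 CPU kernel yet, so cast up before computing the histogram
hist = torch.histc(x.float(), bins=64, min=-4.0, max=4.0)
print(hist.sum())  # 1000 samples fall inside the [-4, 4] range (or nearly all of them)
```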
0
5,987
93,746
Off main thread symbolic evaluation
module: internals, triaged, enhancement, oncall: pt2
Standard JIT design has compilation run off-thread, so that normal user code can continue running while compilation is still occurring. Unfortunately, torchdynamo's implementation in Python makes this difficult to do, as in standard Python there is a GIL which prevents useful parallelization of Python work. This means that we need to think about our off main thread strategy earlier rather than later, as it may imply architectural changes that are easier to do earlier in this project's lifetime. I can imagine a number of possible ways to skin this cat: * Put torchdynamo evaluation in a separate process * Put torchdynamo in a subinterpreter in the same process * All strategies involving a separate Python interpreter will have to somehow transmit all of the frame information as well as locals for evaluation * Rewrite torchdynamo in C++ (ugh) * Rewrite torchdynamo in RPython * Do nothing and pray the host program spends enough time in PyTorch operations that we can make useful progress with standard threading * Do nothing for symbolic evaluation, but offload FX graph compilation to another thread It would be good to know the PoR here. cc @bhosmer @smessmer @ljk53 @bdhirsh @soumith @msaroufim @wconstab @ngimel
4
5,988
75,662
multiprocessing and torch.tensor, Cannot allocate memory error
module: multiprocessing, triaged
### 🐛 Describe the bug Consider the simple example in the following, if I return a list at `func`, there will be no error, if I return a `torch.tensor` I will get this error: ```sh /work/aten/src/ATen/MapAllocator.cpp":263, please report a bug to PyTorch. unable to open shared memory object </torch_326297_506> in read-write mode')' ``` and If it use `set_sharing_strategy`, to avoid this error: ```python from multiprocessing import Pool from time import sleep import torch class A: def __init__(self, n_job, N): self.n_job = n_job self.N = N def func(self, i): # sleep(1) print(i) return torch.as_tensor([i]) # return [i] data = [] def read(self): torch.multiprocessing.set_sharing_strategy('file_system') with Pool(processes=self.n_job) as pool: data = pool.map(self.func, range(self.N)) return data if __name__ == "__main__": a = A(10, 2**16) data = a.read() print(len(data)) ``` I get another error says ```sh MapAllocator.cpp":323, please report a bug to PyTorch. unable to mmap 72 bytes from file </torch_326588_5707>: Cannot allocate memory (12) ``` is there any solution? I need to return the tensor, so just removing the tensor is not an option. I am using ### Versions ```sh PyTorch version: 1.10.2 Is debug build: False CUDA used to build PyTorch: Could not collect ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31 Is CUDA available: False CUDA runtime version: 11.6.112 GPU models and configuration: GPU 0: NVIDIA RTX A5000 Nvidia driver version: 510.47.03 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.5 [pip3] torch==1.10.2 [conda] blas 1.0 mkl [conda] cudatoolkit 11.0.3 h15472ef_10 conda-forge [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py39h7f8727e_0 [conda] mkl_fft 1.3.1 py39hd3c417c_0 [conda] mkl_random 1.2.2 py39h51133e4_0 [conda] numpy 1.21.5 pypi_0 pypi [conda] numpy-base 1.21.2 py39h79a1101_0 [conda] pytorch 1.10.2 cpu_py39hfa7516b_0 ``` cc @VitalyFedyunin
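One way to avoid exhausting shared-memory segments entirely (a workaround sketch, not a fix for the allocator limit) is to return plain NumPy arrays or lists from the workers — these are pickled byte-wise instead of going through torch's shared-memory path — and build the tensor once in the parent:

```python
from multiprocessing import Pool

import numpy as np
import torch

def func(i):
    # plain NumPy is pickled normally; no per-item shared-memory file is created
    return np.asarray([i], dtype=np.int64)

if __name__ == "__main__":
    with Pool(processes=10) as pool:
        chunks = pool.map(func, range(2 ** 16))
    data = torch.from_numpy(np.stack(chunks))
    print(data.shape)  # torch.Size([65536, 1])
```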
5
5,989
75,659
Misleading documentation for cholesky_inverse
module: docs, triaged, module: linear algebra
### 📚 The doc issue https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html The documentation suggests that the input matrix should be the original positive-definite matrix A, whereas the examples show that the user must first compute the Cholesky factor u and pass that into the cholesky_inverse function. ### Suggest a potential alternative/fix Change the parameters section so that it is clear that the input must be the Cholesky factor. cc @svekars @holly1238 @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @Lezcano @brianjo
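A minimal example of the usage the docs should make explicit — the argument is the Cholesky factor, not the original matrix:

```python
import torch

A = torch.randn(3, 3)
A = A @ A.T + 3 * torch.eye(3)      # make A symmetric positive definite
L = torch.linalg.cholesky(A)        # lower-triangular Cholesky factor
A_inv = torch.cholesky_inverse(L)   # pass the factor, not A itself
print(torch.allclose(A_inv, torch.inverse(A), atol=1e-5))  # True
```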
0
5,990
75,657
1.11.0 distributed training behaves differently from 1.8.1
oncall: distributed
If you have a question or would like help and support, please ask at our [forums](https://discuss.pytorch.org/). If you are submitting a feature request, please preface the title with [feature request]. If you are submitting a bug report, please fill in the following details. ## Issue description **I use 1.11.0 run distribution and gpu use like** | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 23723 C .../hubo/install/bin/python3 22143MiB | | 0 N/A N/A 23724 C .../hubo/install/bin/python3 1411MiB | | 0 N/A N/A 23725 C .../hubo/install/bin/python3 1409MiB | | 0 N/A N/A 23726 C .../hubo/install/bin/python3 1409MiB | | 0 N/A N/A 23727 C .../hubo/install/bin/python3 1411MiB | | 0 N/A N/A 23728 C .../hubo/install/bin/python3 1409MiB | | 0 N/A N/A 23729 C .../hubo/install/bin/python3 1409MiB | | 0 N/A N/A 23730 C .../hubo/install/bin/python3 1409MiB | | 1 N/A N/A 23724 C .../hubo/install/bin/python3 20841MiB | | 2 N/A N/A 23725 C .../hubo/install/bin/python3 22977MiB | | 3 N/A N/A 23726 C .../hubo/install/bin/python3 23971MiB | | 4 N/A N/A 23727 C .../hubo/install/bin/python3 13683MiB | | 5 N/A N/A 23728 C .../hubo/install/bin/python3 29723MiB | | 6 N/A N/A 23729 C .../hubo/install/bin/python3 16557MiB | | 7 N/A N/A 23730 C .../hubo/install/bin/python3 19963MiB | **but I use 1.8.1 gpu use like** | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 36861 C /data/hubo/install/bin/python3 27377MiB | | 1 36862 C /data/hubo/install/bin/python3 23477MiB | | 2 36863 C /data/hubo/install/bin/python3 24645MiB | | 3 36864 C /data/hubo/install/bin/python3 30633MiB | | 4 36865 C /data/hubo/install/bin/python3 21605MiB | | 5 36866 C /data/hubo/install/bin/python3 23303MiB | | 6 36867 C /data/hubo/install/bin/python3 28221MiB | | 7 36868 C /data/hubo/install/bin/python3 24285MiB | I want to know how to set 1.11.0 make other gpu don't run in gpu 0 Provide a short description. ## Code example Please try to provide a minimal example to repro the bug. Error messages and stack traces are also helpful. ## System Info Please copy and paste the output from our [environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py) (or fill out the checklist below manually). You can get the script and run it with: ``` wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py # For security purposes, please check the contents of collect_env.py before running it. python collect_env.py ``` - PyTorch or Caffe2: - How you installed PyTorch (conda, pip, source): - Build command you used (if compiling from source): - OS: - PyTorch version:3.9.12 - Python version:1.11.0 - CUDA/cuDNN version:11.3 - GPU models and configuration:v100 - GCC version (if compiling from source):10.1.0 - CMake version: None - Versions of any other relevant libraries: cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang
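The extra ~1.4 GiB processes on GPU 0 in the 1.11.0 run are typically CUDA contexts created by each rank before it is pinned to its own device. A common remedy (a sketch assuming the launcher exports `LOCAL_RANK`, as torchrun does) is to call `torch.cuda.set_device` before any other CUDA work and to map checkpoint loads away from `cuda:0`:

```python
import os

import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)            # pin this rank before any CUDA call
dist.init_process_group(backend="nccl")

model = torch.nn.Linear(16, 4).cuda(local_rank)   # stand-in for the real model
ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

# when restoring checkpoints, avoid the default mapping onto cuda:0:
# state = torch.load("ckpt.pth", map_location=f"cuda:{local_rank}")
```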
2
5,991
75,655
`jit(Function)` results in double execution
oncall: jit
### 🐛 Describe the bug `jit`-compiling a function that calls a custom function (`torch.autograd.Function`) results in the following behavior: * The contents of the custom function are added to TorchScript but the output of the jitted code is discarded * The Python version of the custom function will always be called before the TorchScript version and its output will be used in later code. So effectively, the custom function is executed twice! To reproduce: ```python import torch class MyFunction(torch.autograd.Function): @staticmethod def forward(ctx, *args, **kwargs): print("Python: ", end=" ") x = print_executing_forward(args[0]) return x + 1 @staticmethod def backward(ctx, *grad_args): pass my_function = MyFunction().apply @torch.jit.script_if_tracing def print_executing_forward(x: torch.Tensor): print("Executing forward()") return x def script(x): print("Tracing script...") return my_function(x) print("Tracing script_jit ...") script_jit = torch.jit.trace(script, torch.ones(())) for i in range(10): print(f"Running script_jit for the {i + 1}th time ...") script_jit(torch.ones(())) print() ``` This results in the following output (cropped): ``` [...] Running script_jit for the 9th time ... Python: Executing forward() Executing forward() Running script_jit for the 10th time ... Python: Executing forward() Executing forward() ``` ### Versions Collecting environment information... PyTorch version: 1.11.0+cu113 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31 Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-5.13.0-35-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090 Nvidia driver version: 470.103.01 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.1 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.1 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.1 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.1 /usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn.so.8.2.1 /usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.2.1 /usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.2.1 /usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.2.1 /usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.2.1 /usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.2.1 /usr/local/cuda-11.3/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.2.1 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.5 [pip3] torch==1.11.0+cu113 [pip3] torchaudio==0.10.2+cu113 [pip3] torchvision==0.11.3+cu113 [conda] blas 1.0 mkl [conda] cpuonly 1.0 0 pytorch [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] mkl 2022.0.1 h06a4308_117 [conda] numpy 1.21.5 pypi_0 pypi [conda] pytorch-mutex 1.0 cpu pytorch [conda] torch 1.11.0+cu113 pypi_0 pypi [conda] torchaudio 0.10.2+cu113 pypi_0 pypi [conda] torchvision 0.11.3+cu113 pypi_0 pypi
3
5,992
75,654
jit fails when trying to assign values to model via hook
oncall: jit
### 🐛 Describe the bug I'm using hooks to assign new values to the model on forward passes. Jit script fails with: ``` RuntimeError: attribute assignment is not defined on python value of type 'Model Name': ``` For Example: ```python from typing import Iterable, Callable, Tuple from torch import Tensor, nn, jit from torchvision.models import resnet50 class FeatureExtractor(nn.Module): def __init__(self, model: nn.Module, layers: Iterable[str], num): super().__init__() self.model = model self.num = num for layer_id in layers: layer = dict([*self.model.named_modules()])[layer_id] layer.register_forward_hook(self.save_outputs_hook(layer_id)) def save_outputs_hook(self, layer_id: str) -> Callable: fe = self def fn(_, input: Tuple[Tensor], output): fe.num = 3 return fn def forward(self, x: Tensor): _ = self.model(x) if __name__ == '__main__': resnet_features = FeatureExtractor(resnet50(), layers=["layer4", "avgpool"], num=0) script = jit.script(resnet_features) ``` Fails with ``` RuntimeError: attribute assignment is not defined on python value of type 'FeatureExtractor': File "", line 20 def fn(_, input: Tuple[Tensor], output): fe.num = 3 ~~~~~~~~~~ <--- HERE ``` ### Versions PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: None ROCM used to build PyTorch: N/A OS: macOS 11.6 (x86_64) GCC version: Could not collect Clang version: 13.0.0 (clang-1300.0.29.30) CMake version: Could not collect Libc version: N/A Python version: 3.9.8 (main, Nov 10 2021, 09:21:22) [Clang 13.0.0 (clang-1300.0.29.3)] (64-bit runtime) Python platform: macOS-11.6-x86_64-i386-64bit Is CUDA available: False CUDA runtime version: No CUDA GPU models and configuration: No CUDA Nvidia driver version: No CUDA cuDNN version: No CUDA HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.22.3 [pip3] pytorch-lightning==1.6.0 [pip3] torch==1.11.0 [pip3] torchmetrics==0.7.3 [pip3] torchvision==0.12.0 [conda] Could not collect
1
5,993
75,652
Op segfaults with ForwardAD and Subclassed Tensor as Tangent
triaged, module: forward ad
### 🐛 Describe the bug ```python import torch import torch.autograd.forward_ad as fwAD from torch.utils._pytree import tree_map import contextlib @contextlib.contextmanager def no_dispatch(): guard = torch._C._DisableTorchDispatch() try: yield finally: del guard class CompositeCompliantTensor(torch.Tensor): elem: torch.Tensor __slots__ = ['elem'] __torch_function__ = torch._C._disabled_torch_function_impl @staticmethod def __new__(cls, elem, *args, **kwargs): r = torch.Tensor._make_wrapper_subclass( # type: ignore[attr-defined] cls, elem.size(), dtype=elem.dtype, layout=elem.layout, device=elem.device, requires_grad=elem.requires_grad, strides=elem.stride(), storage_offset=elem.storage_offset()) if elem.requires_grad: r.elem = elem.detach().clone() else: r.elem = elem return r def __repr__(self): return f"CompositeCompliantTensor({self.elem})" @classmethod def __torch_dispatch__(cls, func, types, args=(), kwargs=None): def unwrap(e): return e.elem if isinstance(e, CompositeCompliantTensor) else e def wrap(e): return CompositeCompliantTensor(e) if isinstance(e, torch.Tensor) else e with no_dispatch(): unwrapped_args = tree_map(unwrap, args) unwrapped_kwargs = tree_map(unwrap, kwargs) unwrapped_rs = func(*unwrapped_args, **unwrapped_kwargs) rs = tree_map(wrap, unwrapped_rs) return rs CCT = CompositeCompliantTensor def fn(x, y): return torch.add(x, y) with fwAD.dual_level(): x_f = CCT(torch.tensor(-8.4784, requires_grad=True)) y_f = CCT(torch.tensor(-1.7658, requires_grad=True)) dual_input_1 = fwAD.make_dual(x_f, torch.randn_like(x_f)) dual_input_2 = fwAD.make_dual(y_f, torch.randn_like(y_f)) out = fn(dual_input_1, dual_input_2) print("1") dual_input_1 = fwAD.make_dual(x_f, torch.randn_like(x_f)) dual_input_2 = fwAD.make_dual(y_f, CCT(torch.randn_like(y_f))) out = fn(dual_input_1, dual_input_2) print("2") ``` The following script segfaults while performing the second operation. **NOTE**: Script works if we remove `with no_dispatch` in `__torch__dispatch__`. ### Versions master cc: @zou3519 @soulitzer
3
5,994
75,642
Navi 21 GPU hang when passing wrong input to embedding layer
in progress, module: nn, module: rocm, triaged
### 🐛 Describe the bug Passing an incorrect value to a `torch.nn.Embedding` layer (i.e. a value greater than the number of unique items an embedding layer is initialized to handle) causes a GPU reset/hang on AMD Navi 21 hardware installed in my system. My hardware specs are as follows: CPU: Ryzen 5900X GPU: Radeon 6900XT RAM: 2x 16GB Crucial Ballistix 3600 CL16 SSD: Crucial MX500 Mobo: MSI X570 Tomahawk Here's the software configuration: OS: Debian Testing Kernel: 5.17.1 (custom compiled) ROCm version: 5.1 Pytorch version: 1.12.0a0+git364055b (built from source) It also happens when using AMD's official `rocm5.0.1_ubuntu18.04_py3.7_pytorch_staging` docker image. Here's the code that causes the issue - it causes an error (but no system/GPU hang) on Nvidia hardware: ``` import torch import torch.nn as nn import pytorch_lightning as pl class MovieLensDummyDataset(torch.utils.data.Dataset): #dummy version of the dataset with synthetic data def __init__(self, n, n_users, n_movies): self.users = torch.randint(0,n_users,(n,)).type(torch.int32) self.items = torch.randint(0,n_users,(n,)).type(torch.int32) self.labels = torch.randint(0,2,(n,)).type(torch.uint8) def __getitem__(self, idx): return self.users[idx], self.items[idx], self.labels[idx] def __len__(self): return self.users.shape[0] class NCF(pl.LightningModule): """ Neural Collaborative Filtering (NCF) Args: num_users (int): Number of unique users num_items (int): Number of unique items ratings (pd.DataFrame): Dataframe containing the movie ratings for training all_movieIds (list): List containing all movieIds (train + test) """ def __init__(self, num_users, num_items): super().__init__() self.user_embedding = nn.Embedding(num_embeddings=num_users, embedding_dim=8) self.item_embedding = nn.Embedding(num_embeddings=num_items, embedding_dim=8) self.fc1 = nn.Linear(in_features=16, out_features=64) self.fc1_activation = nn.ReLU() self.fc2 = nn.Linear(in_features=64, out_features=32) self.fc2_activation = nn.ReLU() self.output = nn.Linear(in_features=32, out_features=1) self.out_activation = nn.Sigmoid() self.loss_func = nn.BCELoss() def forward(self, user_input, item_input): # Pass through embedding layers user_embedded = self.user_embedding(user_input) item_embedded = self.item_embedding(item_input) # Concat the two embedding layers vector = torch.cat([user_embedded, item_embedded], dim=-1) # Pass through dense layer vector = self.fc1_activation(self.fc1(vector)) vector = self.fc2_activation(self.fc2(vector)) # Output layer pred = self.out_activation(self.output(vector)) return pred def training_step(self, batch, batch_idx): user_input, item_input, labels = batch predicted_labels = self(user_input, item_input) loss = self.loss_func(predicted_labels, labels.view(-1, 1).float()) return loss def configure_optimizers(self): return torch.optim.Adam(self.parameters()) def train_dataloader(self): return self._train_dataloader def set_train_dataloader(self, dl): self._train_dataloader = dl def set_test_dataloader(self, dl): self._test_dataloader = dl num_users=13849 num_items=19103 num_samples=10142520 #the bug is here - the whole thing works if num_users supplied to the dataset is <= num_users supplied to the model - the issue is about the size of the embedding. #on nvidia hardware, this bug causes an error and the program stops. 
on amd hardware, supplying this leads to a GPU hang/reset train_dummy_ds = MovieLensDummyDataset(num_samples, 10*num_users, num_items) train_dummy_dl = torch.utils.data.DataLoader(train_dummy_ds, batch_size=2048, num_workers=12) model = NCF(num_users, num_items) model.set_train_dataloader(train_dummy_dl) trainer = pl.Trainer(max_epochs=5, gpus=1, progress_bar_refresh_rate=50, logger=False, checkpoint_callback=False, amp_backend='native') trainer.fit(model) ``` Here's the kernel log when the GPU hang occurs: ``` Apr 12 00:54:42 hostname kernel: [drm:amdgpu_dm_atomic_commit_tail [amdgpu]] *ERROR* Waiting for fences timed out! Apr 12 00:54:42 hostname kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx_0.0.0 timeout, signaled seq=159760, emitted seq=159762 Apr 12 00:54:42 hostname kernel: [drm:amdgpu_job_timedout [amdgpu]] *ERROR* Process information: process gnome-shell pid 2594 thread gnome-shel:cs0 pid 2616 Apr 12 00:54:42 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: GPU reset begin! Apr 12 00:54:42 hostname kernel: amdgpu: Failed to suspend process 0x8007 Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: [drm:amdgpu_ring_test_helper [amdgpu]] *ERROR* ring kiq_2.1.0 test failed (-110) Apr 12 00:54:43 hostname kernel: [drm:gfx_v10_0_hw_fini [amdgpu]] *ERROR* KGQ disable failed Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: [drm:amdgpu_ring_test_helper [amdgpu]] *ERROR* ring kiq_2.1.0 test failed (-110) Apr 12 00:54:43 hostname kernel: [drm:gfx_v10_0_hw_fini [amdgpu]] *ERROR* KCQ disable failed Apr 12 00:54:43 hostname kernel: [drm:gfx_v10_0_hw_fini [amdgpu]] *ERROR* failed to halt cp gfx Apr 12 00:54:43 hostname kernel: [drm] free PSP TMR buffer Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec68053da00 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec68094d200 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec6804f9600 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec680993200 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec680d5c000 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec680de8000 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec6804e9a00 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec6804eb600 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec680994e00 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: AMD-Vi: Event logged [IO_PAGE_FAULT domain=0x001a address=0xec82bbe3900 flags=0x0010] Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: MODE1 reset Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: GPU mode1 reset Apr 12 00:54:43 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: GPU smu mode1 reset Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: GPU reset succeeded, trying to resume Apr 12 00:54:44 hostname kernel: [drm] PCIE GART of 512M enabled (table at 0x0000008000300000). 
Apr 12 00:54:44 hostname kernel: [drm] VRAM is lost due to GPU reset! Apr 12 00:54:44 hostname kernel: [drm] PSP is resuming... Apr 12 00:54:44 hostname kernel: [drm] reserve 0xa00000 from 0x83fe000000 for PSP TMR Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: SECUREDISPLAY: securedisplay ta ucode is not available Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: SMU is resuming... Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: SMU is resumed successfully! Apr 12 00:54:44 hostname kernel: [drm] DMUB hardware initialized: version=0x02020003 Apr 12 00:54:44 hostname kernel: [drm] kiq ring mec 2 pipe 1 q 0 Apr 12 00:54:44 hostname kernel: [drm] VCN decode and encode initialized successfully(under DPG Mode). Apr 12 00:54:44 hostname kernel: [drm] JPEG decode initialized successfully. Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring gfx_0.0.0 uses VM inv eng 0 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.0.0 uses VM inv eng 1 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.1.0 uses VM inv eng 4 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.2.0 uses VM inv eng 5 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.3.0 uses VM inv eng 6 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.0.1 uses VM inv eng 7 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.1.1 uses VM inv eng 8 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.2.1 uses VM inv eng 9 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring comp_1.3.1 uses VM inv eng 10 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring kiq_2.1.0 uses VM inv eng 11 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring sdma0 uses VM inv eng 12 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring sdma1 uses VM inv eng 13 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring sdma2 uses VM inv eng 14 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring sdma3 uses VM inv eng 15 on hub 0 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring vcn_dec_0 uses VM inv eng 0 on hub 1 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring vcn_enc_0.0 uses VM inv eng 1 on hub 1 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring vcn_enc_0.1 uses VM inv eng 4 on hub 1 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring vcn_dec_1 uses VM inv eng 5 on hub 1 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring vcn_enc_1.0 uses VM inv eng 6 on hub 1 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring vcn_enc_1.1 uses VM inv eng 7 on hub 1 Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: ring jpeg_dec uses VM inv eng 8 on hub 1 Apr 12 00:54:44 hostname gnome-shell[2594]: amdgpu: amdgpu_cs_query_fence_status failed. Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: recover vram bo from shadow start Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: recover vram bo from shadow done Apr 12 00:54:44 hostname kernel: [drm] Skip scheduling IBs! Apr 12 00:54:44 hostname kernel: [drm] Skip scheduling IBs! Apr 12 00:54:44 hostname kernel: amdgpu 0000:2f:00.0: amdgpu: GPU reset(2) succeeded! 
Apr 12 00:54:44 hostname kernel: [drm] Skip scheduling IBs! Apr 12 00:54:44 hostname kernel: [drm] Skip scheduling IBs! Apr 12 00:54:44 hostname kernel: [drm] Skip scheduling IBs! ``` [Here's](https://pastebin.com/J017YHZU) what happens with Nvidia hardware instead (A100, Pytorch 1.11 via docker). ### Versions PyTorch version: 1.12.0a0+git364055b Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 5.0.13601-6b731c37 OS: Debian GNU/Linux bookworm/sid (x86_64) GCC version: (Debian 11.2.0-19) 11.2.0 Clang version: Could not collect CMake version: version 3.23.0 Libc version: glibc-2.33 Python version: 3.9.12 (main, Mar 24 2022, 13:02:21) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.17.1-x86_64-with-glibc2.33 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: AMD Radeon RX 6900 XT Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 50120.53.1 MIOpen runtime version: 2.15.0 Is XNNPACK available: True Versions of relevant libraries: [pip3] numpy==1.21.5 [pip3] pytorch-lightning==1.6.0 [pip3] torch==1.12.0a0+git364055b [pip3] torchmetrics==0.7.3 [conda] Could not collect cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @kshitij12345 @KyleCZH
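Independent of any driver-side fix, the out-of-range index can be caught on the host before it ever reaches the embedding kernel. A small validation sketch (the helper name is mine):

```python
import torch

def check_embedding_input(indices, num_embeddings, name="input"):
    # cheap host-side sanity check before the kernel launch; indices outside
    # [0, num_embeddings) are what triggers the error / GPU hang
    bad = (indices < 0) | (indices >= num_embeddings)
    if bad.any():
        raise ValueError(
            f"{name}: {int(bad.sum())} indices fall outside [0, {num_embeddings})"
        )

users = torch.randint(0, 10 * 13849, (2048,))
check_embedding_input(users, 13849, "user_input")   # raises before the GPU hang
```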
5
5,995
75,634
[ONNX] Enable stacktrace print for TORCH_INTERNAL_ASSERT errors in export.
module: onnx, triaged, onnx-triaged
Using `TORCH_SHOW_CPP_STACKTRACES=1` shows a more complete stack trace for TORCH_INTERNAL_ASSERT errors. One option is to set this automatically when exporting with `verbose=True`.
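For reference, a sketch of how the flag is set today, from the shell or from Python; setting it before torch is imported is an assumption on my part about when the variable is read:

```python
import os

# alternatively, from the shell:
#   TORCH_SHOW_CPP_STACKTRACES=1 python export_model.py
os.environ["TORCH_SHOW_CPP_STACKTRACES"] = "1"

import torch  # noqa: E402
# torch.onnx.export(...) failures with TORCH_INTERNAL_ASSERT now carry C++ frames
```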
3
5,996
75,625
[ONNX] Support unit tests in scripting that we already support in tracing
module: onnx, triaged, onnx-triaged
- [x] Enable scripting tests that are currently working due to recent changes in pytorch export https://github.com/pytorch/pytorch/pull/77254 - [ ] Non-scriptable operators (as_strided, transformer_encoder) - [ ] Dict type (ONNX map) not supported https://github.com/pytorch/pytorch/issues/81482 - [ ] Shape/type inference issues - [ ] Issues related to support for kwargs/args https://github.com/pytorch/pytorch/issues/81478 - [ ] args* issues post compilation - [ ] Support for optional type inference - [ ] Input/output shape issues - [ ] Issues due to ONNX Spec Gaps - [ ] Failures from jit_pass_lower_tuple - [ ] Failures from jit_pass_mutation https://github.com/pytorch/pytorch/pull/79555 - [ ] User defined/custom class support - [ ] Model code not scriptable - [ ] Unscriptable modules - [x] Support for Union Types https://github.com/pytorch/pytorch/pull/77254 Migrated from the work item: <https://microsoft.sharepoint.com/:x:/r/teams/ONNX2/_layouts/15/Doc.aspx?sourcedoc=%7B2F53DDC4-54E9-488F-B5CB-C114937A3E90%7D&file=TorchScript%20Export%20Issues.xlsx&wdOrigin=OFFICECOM-WEB.MAIN.SEARCH&ct=1636674278879&action=default&mobileredirect=true&share=IQHE3VMv6VSPSLXLwRSTej6QAWZK7HjHmHw9xD4UMRwbV4U&cid=9094dc65-62dd-4898-86b4-84904add63f7>
1
5,997
75,599
kthvalue 20x slower than sort
module: performance, triaged, module: sorting and selection
### 🐛 Describe the bug An example task: find 2nd smallest element in a huge tensor on a cuda-device. 1) First approach via `torch.sort`: ```python import torch import time elapsed = 0.0 runs = 100 for i in range(runs): tensor = torch.rand(100_000_000, device='cuda:0') torch.cuda.synchronize() start = time.time() sorted, _ = tensor.sort() second_smallest_el = sorted[1] torch.cuda.synchronize() end = time.time() elapsed += end - start print(elapsed/runs) # 0.03 sec ``` 2) Second approach via `torch.kthvalue`: ```python import torch import time elapsed = 0.0 runs = 100 for i in range(runs): tensor = torch.rand(100_000_000, device='cuda:0') torch.cuda.synchronize() start = time.time() val, _ = torch.kthvalue(tensor, 2) torch.cuda.synchronize() end = time.time() elapsed += end - start print(elapsed/runs) # 0.68 sec ``` I would expect much better running time of the `kthvalue`, especially when `k` is as small as `2`. ### Versions Collecting environment information... PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Debian GNU/Linux 10 (buster) (x86_64) GCC version: (Debian 8.3.0-6) 8.3.0 Clang version: Could not collect CMake version: version 3.13.4 Libc version: glibc-2.28 Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-4.19.0-19-amd64-x86_64-with-glibc2.17 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: GeForce RTX 3090 Nvidia driver version: 460.73.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True Versions of relevant libraries: [conda] blas 1.0 mkl [conda] cudatoolkit 11.3.1 h2bc3f7f_2 [conda] ffmpeg 4.3 hf484d3e_0 pytorch [conda] mkl 2021.4.0 h06a4308_640 [conda] mkl-service 2.4.0 py38h7f8727e_0 [conda] mkl_fft 1.3.1 py38hd3c417c_0 [conda] mkl_random 1.2.2 py38h51133e4_0 [conda] mypy-extensions 0.4.3 pypi_0 pypi [conda] numpy 1.22.3 pypi_0 pypi [conda] numpy-base 1.21.2 py38h79a1101_0 [conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchvision 0.12.0 py38_cu113 pytorch cc @VitalyFedyunin @ngimel
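A possibly faster route for small `k` is `torch.topk` with `largest=False`, which avoids both the full sort and the current `kthvalue` path; whether it wins depends on the GPU and tensor size, so this is offered as a comparison point rather than a fix:

```python
import time

import torch

tensor = torch.rand(100_000_000, device="cuda:0")
torch.cuda.synchronize()
start = time.time()
vals, _ = torch.topk(tensor, 2, largest=False, sorted=True)  # the two smallest values
second_smallest_el = vals[1]
torch.cuda.synchronize()
print(time.time() - start)
```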
5
5,998
75,586
Add ZeroTensor support for `mm`
module: performance, module: autograd, triaged, actionable, ZeroTensor
As per the title. example PR: https://github.com/pytorch/pytorch/pull/71129 cc @VitalyFedyunin @ngimel @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7
0
5,999
75,582
Add `balance` flag to `random_split`
triaged, enhancement, module: data
### 🚀 The feature, motivation and pitch It would be much easier to have a `balance` flag in `random_split` for balancing between classes, without needing to resort to `Sampler` or `train_test_split` from `sklearn`. ### Alternatives `Sampler` or `train_test_split` from `sklearn` ### Additional context Just make it for me, please ^^ cc @VitalyFedyunin @ejguan @NivekT
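Until such a flag exists, a class-balanced (stratified) split can be written in a few lines on top of `Subset`; a sketch, assuming `labels` is a 1-D tensor of class ids aligned with the dataset:

```python
import torch
from torch.utils.data import Subset

def stratified_split(dataset, labels, train_frac=0.8, seed=0):
    """Split indices per class so both subsets keep the class proportions."""
    g = torch.Generator().manual_seed(seed)
    train_idx, val_idx = [], []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(idx.numel(), generator=g)]  # shuffle within the class
        cut = int(train_frac * idx.numel())
        train_idx.append(idx[:cut])
        val_idx.append(idx[cut:])
    train_idx = torch.cat(train_idx).tolist()
    val_idx = torch.cat(val_idx).tolist()
    return Subset(dataset, train_idx), Subset(dataset, val_idx)
```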
7
6,000
75,577
Cannot use socks5h proxy because of urllib: `urlopen error Remote end closed connection without response`
triaged, module: vision, module: hub
### 🐛 Describe the bug ```python # either torch.hub.load('pytorch/vision', 'resnet50', pretrained=True) # or from torchvision.models.resnet import resnet50 resnet50(pretrained=True) ``` Basically what is happening is that I have set up `http[s]_proxy=socks5h://localhost:1080` environment variable for `urllib` to use it, and it is. The problem is that is not using the `keep-alive` header and the server is prematurely dropping the connection. If I use `requests` module it works just fine. If you do `urllib.request.urlopen("https://github.com/")` it will be the same error. ``` During handling of the above exception, another exception occurred: URLError Traceback (most recent call last) notebook.ipynb Cell 9' in <cell line: 5>() 1 # model = torch.hub.load('pytorch/vision', 'resnet50', pretrained=True) 3 from torchvision.models.resnet import resnet50 ----> 5 model = resnet50(pretrained=True) File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/site-packages/torchvision/models/resnet.py:331, in resnet50(pretrained, progress, **kwargs) 323 def resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: 324 r"""ResNet-50 model from 325 `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_. 326 (...) 329 progress (bool): If True, displays a progress bar of the download to stderr 330 """ --> 331 return _resnet("resnet50", Bottleneck, [3, 4, 6, 3], pretrained, progress, **kwargs) File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/site-packages/torchvision/models/resnet.py:296, in _resnet(arch, block, layers, pretrained, progress, **kwargs) 294 model = ResNet(block, layers, **kwargs) 295 if pretrained: --> 296 state_dict = load_state_dict_from_url(model_urls[arch], progress=progress) 297 model.load_state_dict(state_dict) 298 return model File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/site-packages/torch/hub.py:591, in load_state_dict_from_url(url, model_dir, map_location, progress, check_hash, file_name) 589 r = HASH_REGEX.search(filename) # r is Optional[Match[str]] 590 hash_prefix = r.group(1) if r else None --> 591 download_url_to_file(url, cached_file, hash_prefix, progress=progress) 593 if _is_legacy_zip_format(cached_file): 594 return _legacy_zip_load(cached_file, model_dir, map_location) File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/site-packages/torch/hub.py:457, in download_url_to_file(url, dst, hash_prefix, progress) 455 file_size = None 456 req = Request(url, headers={"User-Agent": "torch.hub"}) --> 457 u = urlopen(req) 458 meta = u.info() 459 if hasattr(meta, 'getheaders'): File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/urllib/request.py:214, in urlopen(url, data, timeout, cafile, capath, cadefault, context) 212 else: 213 opener = _opener --> 214 return opener.open(url, data, timeout) File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/urllib/request.py:517, in OpenerDirector.open(self, fullurl, data, timeout) 514 req = meth(req) 516 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method()) --> 517 response = self._open(req, data) 519 # post-process response 520 meth_name = protocol+"_response" File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/urllib/request.py:534, in OpenerDirector._open(self, req, data) 531 return result 533 protocol = req.type --> 534 result = self._call_chain(self.handle_open, protocol, protocol + 535 '_open', req) 536 if result: 537 return result File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/urllib/request.py:494, in 
OpenerDirector._call_chain(self, chain, kind, meth_name, *args) 492 for handler in handlers: 493 func = getattr(handler, meth_name) --> 494 result = func(*args) 495 if result is not None: 496 return result File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/urllib/request.py:1389, in HTTPSHandler.https_open(self, req) 1388 def https_open(self, req): -> 1389 return self.do_open(http.client.HTTPSConnection, req, 1390 context=self._context, check_hostname=self._check_hostname) File ~/.pyenv/versions/miniconda3-latest/lib/python3.9/urllib/request.py:1349, in AbstractHTTPHandler.do_open(self, http_class, req, **http_conn_args) 1346 h.request(req.get_method(), req.selector, req.data, headers, 1347 encode_chunked=req.has_header('Transfer-encoding')) 1348 except OSError as err: # timeout error -> 1349 raise URLError(err) 1350 r = h.getresponse() 1351 except: URLError: <urlopen error Remote end closed connection without response> ``` ### Versions ``` Collecting environment information... PyTorch version: 1.11.0 Is debug build: False CUDA used to build PyTorch: 11.3 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.3 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0 Clang version: Could not collect CMake version: version 3.16.3 Libc version: glibc-2.31 Python version: 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] (64-bit runtime) Python platform: Linux-4.15.0-101-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: Could not collect GPU models and configuration: GPU 0: NVIDIA TITAN RTX GPU 1: NVIDIA TITAN RTX Nvidia driver version: 510.39.01 cuDNN version: Could not collect ... [conda] numpy-base 1.21.2 py39h79a1101_0 [conda] pytorch 1.11.0 py3.9_cuda11.3_cudnn8.2.0_0 pytorch [conda] pytorch-mutex 1.0 cuda pytorch [conda] torchvision 0.12.0 py39_cu113 pytorch ``` cc @fmassa @vfdev-5 @pmeier @nairbv @NicolasHug @vmoens @jdsgomes
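A practical workaround while `download_url_to_file` still uses bare `urllib` is to fetch the weights with `requests` (which honours socks5h proxies via the `requests[socks]` extra) and load the file directly. A sketch against torchvision 0.12, where `model_urls` still exists as a module-level dict:

```python
import os

import requests
import torch
from torchvision.models.resnet import model_urls, resnet50

url = model_urls["resnet50"]
dst = os.path.expanduser("~/.cache/torch/hub/checkpoints/" + os.path.basename(url))
os.makedirs(os.path.dirname(dst), exist_ok=True)

# requests picks up http[s]_proxy=socks5h://... from the environment
with requests.get(url, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open(dst, "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)

model = resnet50()                       # no pretrained=True, so no urllib download
model.load_state_dict(torch.load(dst))
```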
2