anon-repair-bot committed
Commit 4218a92 · verified · 1 Parent(s): 6f5cf0a

Fix: Ensure model is moved to same device as inputs in example code


## Description

The example code in the model card raises the following runtime error during inference on a GPU:
```
RuntimeError: Expected all tensors to be on the same device, but got index is on cuda:0, different from other tensors on cpu.
```
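
In essence, the token ids end up on `cuda:0` while the un-moved model's weights remain on `cpu`, so the embedding lookup fails. A tiny standalone reproduction of the same class of error (illustrative only, not the card's code):
```python
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

emb = torch.nn.Embedding(10, 4)           # weights stay on the CPU, like the un-moved model
ids = torch.tensor([1, 2, 3]).to(device)  # token ids land on cuda:0 when a GPU is present

try:
    emb(ids)  # lookup with index on cuda:0 and weight on cpu -> RuntimeError
except RuntimeError as e:
    print(e)
```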

## Changes

Replaced:
```python
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
with:
```python
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
```
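
For context, here is how the patched line fits into the card's example end to end. The `device`, `model_name`, `premise`, and `hypothesis` lines are taken from the diff below; the tokenization and forward-pass steps are an assumption about the surrounding snippet and are not part of this patch:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# patched line: model weights now live on the same device as the inputs
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

# assumed inference step: move the tokenized pair to the same device before the forward pass
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits[0], dim=-1)
print(probs)
```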

## Testing
The updated example code has been tested and runs without the device error.

## Note
This contribution is part of an ongoing research initiative to systematically identify and correct faulty example code in Hugging Face Model Cards.
We would appreciate a timely review and merge of this patch to improve the reliability and reproducibility of the example code for downstream users.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -299,7 +299,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp
 
 model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForSequenceClassification.from_pretrained(model_name)
+model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
 
 premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
 hypothesis = "The movie was good."