Any luck doing inference in 8xA100?
#57
opened by taytun
Was anybody able to run inference on 8 × A100 (80 GB) GPUs?
It was already a pain just to get the model loaded, but I'm still unable to run inference.
ENV:
cupy-cuda12x==13.4.1
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
torch==2.6.0
torchaudio==2.6.0
torchelastic==0.2.2
torchvision==0.21.0
transformers==4.51.1
vllm==0.8.3
GPUs: 8 × A100 (80 GB)
CODE:
from transformers import AutoProcessor, AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, Llama4ForConditionalGeneration
import torch

# Load the model and tokenizer
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="flex_attention",
    # quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [{"type": "text", "text": "explain me the cuda"}]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)
print(outputs[0])
Error message:
TorchRuntimeError: Failed running call_function <built-in function add>(*(FakeTensor(..., device='cuda:1', size=(), dtype=torch.int32), FakeTensor(..., device='cuda:0', size=(), dtype=torch.int64)), **{}):
Unhandled FakeTensor Device Propagation for aten.add.Tensor, found two different devices cuda:1, cuda:0
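For reference, the traceback seems to point at flex attention being traced with tensors that live on two different GPUs (cuda:0 and cuda:1) once device_map="auto" spreads the layers across cards. One thing that might be worth trying (an untested sketch, not a confirmed fix) is loading without flex_attention, e.g. with SDPA attention:

model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="sdpa",  # or "eager"; avoids the compiled flex-attention path
    device_map="auto",
    torch_dtype=torch.bfloat16,
)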
I am also getting the same error on 8 × A100.
Scout runs on 4xA100, with 4.51.1.
@taytun have you tried running the code above with torchrun? torchrun --nproc_per_node=8 <script_above.py>
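If going the torchrun route, the script presumably needs to request tensor parallelism explicitly instead of device_map="auto" (otherwise each of the 8 ranks would try to load and shard the full model on its own). A rough, untested sketch of what that could look like, assuming transformers' tp_plan="auto" supports Llama 4 (script name is made up):

# launch with: torchrun --nproc_per_node=8 run_scout_tp.py  (hypothetical filename)
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"
processor = AutoProcessor.from_pretrained(model_id)

model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    tp_plan="auto",            # shard weights across the ranks started by torchrun
    torch_dtype=torch.bfloat16,
)

messages = [{"role": "user", "content": [{"type": "text", "text": "explain me the cuda"}]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0])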
I'm on 4.51.2