loading as llama model

#4 opened by KnutJaegersberg

It seems to load as DeciLMForCausalLM. Is it possible to load it as a Llama model for easier compatibility?

NVIDIA org

Sorry, our custom architecture is not supported by modeling_llama.py.
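For readers hitting the same issue: since the custom architecture cannot be mapped onto modeling_llama.py, the usual route is to let `transformers` load the repository's own modeling code by passing `trust_remote_code=True`. A minimal sketch, assuming a DeciLM-style repo; the repo id `Deci/DeciLM-7B` is an illustrative placeholder, not taken from this thread:

```python
def load_custom_model(repo_id: str):
    """Load a model whose architecture (e.g. DeciLMForCausalLM) is defined
    by custom modeling code shipped inside the repo, not by transformers
    built-ins such as modeling_llama.py."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code=True tells transformers to download and execute the
    # repo's own modeling files instead of forcing a built-in class.
    model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    return model, tokenizer


# Example (placeholder repo id, downloads weights when actually run):
# model, tokenizer = load_custom_model("Deci/DeciLM-7B")
```

Tools that only accept the plain Llama architecture generally cannot consume such checkpoints directly, since the custom code may implement features (e.g. per-layer attention configurations) that modeling_llama.py does not represent.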

KnutJaegersberg changed discussion status to closed