TheBloke / Llama-2-7B-32K-Instruct-GPTQ

Text Generation · Transformers · Safetensors · English · llama · custom_code · text-generation-inference · 4-bit precision · gptq
Community · 3 discussions
OOM when quantizing for 32k context length

#3 · opened over 1 year ago by harshilp
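
Not from the thread, but for context: memory use during GPTQ calibration scales with the calibration sequence length, so one commonly used workaround is to calibrate on sequences much shorter than the 32k inference target; weight-only quantization does not itself fix the context length. A minimal sketch under those assumptions, with the base model id, sequence length, and output directory as placeholders:

```python
# Sketch only: GPTQ-quantize a long-context Llama checkpoint while capping the
# calibration sequence length so the per-block activation cache stays small.
# Assumes recent transformers + optimum + auto-gptq.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base_id = "togethercomputer/Llama-2-7B-32K-Instruct"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_id)
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    dataset="c4",        # one of the built-in calibration dataset options
    tokenizer=tokenizer,
    model_seqlen=4096,   # calibrate on shorter sequences than the 32k target
)

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=gptq_config,  # quantization runs while loading
    device_map="auto",
    trust_remote_code=True,           # the base repo ships custom modeling code
)
model.save_pretrained("llama-2-7b-32k-instruct-gptq")  # placeholder output dir
tokenizer.save_pretrained("llama-2-7b-32k-instruct-gptq")
```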

Code is looking for 'modeling_flash_llama.py' on huggingface even though I have it in local folder

#2 · opened over 1 year ago by alexrider
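
For context (a sketch, not the thread's resolution): this repo is tagged custom_code, so transformers only executes the bundled modeling_flash_llama.py when trust_remote_code=True is passed, and pointing from_pretrained at the local folder normally makes it resolve that module from disk. If the local config.json's auto_map still references another Hub repository, transformers may nonetheless try to fetch the module remotely. The directory path below is a placeholder.

```python
# Sketch: load the checkpoint from a local folder so the custom modeling code
# (modeling_flash_llama.py) is picked up from disk. Path is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

local_dir = "./Llama-2-7B-32K-Instruct-GPTQ"  # placeholder path to the local copy

tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(
    local_dir,
    device_map="auto",
    trust_remote_code=True,  # required: the repo defines a custom llama class
)
```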

Fine tuning this model further

#1 · 1 comment · opened over 1 year ago by sdranju
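
For context (a sketch, not advice from the thread): one common route for further training a GPTQ checkpoint is to freeze the 4-bit weights and train LoRA adapters with peft. This assumes transformers, optimum/auto-gptq, and peft versions with GPTQ + LoRA support; the hyperparameters are placeholders, and the target module names must match the projection names in this repo's custom modeling code.

```python
# Sketch: attach trainable LoRA adapters on top of the frozen 4-bit GPTQ weights.
# Assumed stack: transformers + optimum/auto-gptq + peft with GPTQ support.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "TheBloke/Llama-2-7B-32K-Instruct-GPTQ"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,   # the repo ships custom llama modeling code
)

model = prepare_model_for_kbit_training(model)  # grads on inputs, fp32 norms, etc.
lora_config = LoraConfig(
    r=16,                                 # placeholder rank
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # placeholders; must match the repo's layer names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```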