TheBloke / Llama-2-7B-GGUF

Tags: Text Generation · Transformers · GGUF · PyTorch · English · llama · facebook · meta · llama-2
Community (9 discussions)

#8 How to finetune locally downloaded llama-2-7b-chat.Q4_0.gguf (opened about 1 year ago by Sridevi17j)

#7 [AUTOMATED] Model Memory Requirements (opened about 1 year ago by model-sizer-bot)

#6 [AUTOMATED] Model Memory Requirements (opened about 1 year ago by model-sizer-bot)

#5 404 error (4 comments, opened over 1 year ago by mrbigs)

#3 How do I convert flan-t5-large model to GGUF? Already tried convert.py from llama.cpp (5 comments, opened over 1 year ago by niranjanakella)

#2 Maximum context length (512) (👍 2, 7 comments, opened over 1 year ago by AsierRG55)

#1 Error: ggml_allocr_alloc: not enough space in the buffer (needed 136059008, largest block available 16891904) (👍 1, 2 comments, opened over 1 year ago by RajeshkumarV)