PrunaAI / Phi-3-mini-128k-instruct-GGUF-Imatrix-smashed
Tags: GGUF · pruna-ai · conversational

Community discussions (7)
  • Why is the generated content always the same when I use this model?
    #7, opened 9 months ago by LiMuyi
  • Phi 3 tokenizer_config has been updated upstream
    #6, opened about 1 year ago by smcleod
  • Mistake in readme instructions (2 replies)
    #5, opened about 1 year ago by adamkdean
  • Gibberish results when context is greater than 2048 (9 replies)
    #4, opened about 1 year ago by Bakanayatsu
  • Do they work with ollama? How was the conversion done for 128K? llama.cpp/convert.py complains about ROPE. (8 replies)
    #2, opened about 1 year ago by BigDeeper
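
Several of these threads (the gibberish-above-2048-context report and the ollama/ROPE conversion question) concern running the repository's GGUF files locally with a long context. As general background rather than an answer taken from the threads, a minimal sketch of loading one of the quantized files with llama-cpp-python and an explicitly enlarged context window might look like the following; the .gguf file name and the prompt are placeholders, not the repository's exact artifact names.

    # Minimal sketch: load a Phi-3-mini-128k GGUF quant with llama-cpp-python.
    # The model file name below is a placeholder; pick an actual .gguf file
    # from the repository's Files and versions tab.
    from llama_cpp import Llama

    llm = Llama(
        model_path="Phi-3-mini-128k-instruct.Q4_K_M.gguf",  # placeholder file name
        n_ctx=8192,       # request a context window larger than the small default
        n_gpu_layers=-1,  # offload all layers to GPU if one is available
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize what a GGUF file is."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])

Depending on the runtime and version, llama.cpp-based tools default to a fairly small context length unless it is raised explicitly (n_ctx here, or the corresponding context-length option in other frontends), which is typically the first setting to check when long-context output degrades.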