
mlabonne/Beyonder-4x7B-v2

Tags: Text Generation · Transformers · Safetensors · mixtral · Mixture of Experts · Merge · mergekit · Mistral · conversational · Eval Results · text-generation-inference
Linked models: openchat/openchat-3.5-1210 · beowolx/CodeNinja-1.0-OpenChat-7B · maywell/PiVoT-0.1-Starling-LM-RP · WizardLM/WizardMath-7B-V1.1
Community (10 discussions)

  • #9: "Any comparison between the embed methods and adding pos/neg prompts?" (1 reply, opened over 1 year ago by adi-kmt)
  • #8: "How does MergeKit's MoE integration work?" (6 replies, opened over 1 year ago by arhanovich)
  • #7: "Wrong prompt format in tokenizer_config.json?" (3 replies, opened over 1 year ago by wolfram)
  • #5: "Which merge method was used? Linear?" (3 replies, 👍 1, opened over 1 year ago by aaagggddd)
  • #4: "The performance is lower than the base model?" (1 reply, opened over 1 year ago by Spico)
  • #3: "Max context length" (1 reply, opened over 1 year ago by Desm0nt)
  • #2: "Code bench" (1 reply, opened over 1 year ago by rombodawg)