Schmip committed on
Commit 9bd1225 · verified · 1 Parent(s): 985aa05

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
 - moe
 ---
 # Model Card for Mixtral-8x7B
-The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mistral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
+The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
 
 For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
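
For context on the phrase "Sparse Mixture of Experts" in the card: each token is processed by only a small subset of expert feed-forward networks, selected by a learned router (per the release blog post, Mixtral routes each token to 2 of 8 experts). Below is a minimal, illustrative PyTorch sketch of such top-k routing; the class name `SparseMoELayer`, the dimensions, and the expert MLP shape are assumptions made for the example, not Mixtral's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Toy sparse Mixture-of-Experts layer (illustrative, not Mixtral's code):
    a router scores all experts per token, keeps the top-k, and mixes their
    outputs with renormalised gate weights."""

    def __init__(self, dim: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts, bias=False)
        # Each expert is a small feed-forward network (shape is an assumption).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Score every expert for every token.
        logits = self.router(x)                          # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)   # keep only top-k experts
        weights = F.softmax(weights, dim=-1)             # renormalise over the k picked
        out = torch.zeros_like(x)
        # Sparsity: each expert runs only on the tokens routed to it.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out


# Usage: 16 tokens of width 64; only 2 of the 8 experts run per token.
layer = SparseMoELayer(dim=64)
y = layer(torch.randn(16, 64))
print(y.shape)  # torch.Size([16, 64])
```

The design point the card alludes to: because only k of n experts execute per token, the model holds the parameters of all experts but spends the compute of roughly k experts, which is how Mixtral-8x7B can compete with much larger dense models such as Llama 2 70B.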