sophiamyang committed · Commit ffe1a70 · verified · 1 Parent(s): 985aa05

Update README.md

Files changed (1): README.md (+2, -0)
README.md CHANGED
@@ -8,6 +8,8 @@ language:
 - en
 tags:
 - moe
+
+extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/fr/terms/">Privacy Policy</a>.
 ---
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mistral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
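For reference, a sketch of how the README's YAML frontmatter reads after this commit, based only on the hunk above; any metadata fields before line 8 of README.md are not shown here and are omitted rather than assumed:

```yaml
language:
- en
tags:
- moe

extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/fr/terms/">Privacy Policy</a>.
---
```

The new `extra_gated_description` key sits inside the frontmatter (before the closing `---`), so the Hugging Face Hub can surface the privacy-policy notice on the model's gated-access form without altering the model card body.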