Enable inference & update examples in README to be compatible with soon-to-come ChatWidget (#149)
Commit 1622fac21b8a6e014e6f50b2581a41fb6957b9cb
Co-authored-by: Simon Brandeis <[email protected]>
README.md
CHANGED
@@ -6,7 +6,11 @@ language:
 - de
 - es
 - en
-inference:
+inference: true
+widget:
+- messages:
+  - role: user
+    content: What is your favorite condiment?
 ---
 # Model Card for Mixtral-8x7B
 The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
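The added `widget` entry uses the chat-style format expected by the upcoming ChatWidget: each example is a `messages` list of `role`/`content` turns, and `inference: true` enables hosted inference for the repo. Below is a minimal sketch of sending the same conversation programmatically via `huggingface_hub`'s `InferenceClient`; the repo id and the client call are illustrative assumptions, not part of this commit.

```python
# Minimal sketch: send the README widget's example message through the Hub
# inference client. The repo id is an assumption, and the model must actually
# be served by the inference API for this call to succeed.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mixtral-8x7B-Instruct-v0.1")  # assumed repo id

# Mirror the `messages` entry from the README front matter.
messages = [{"role": "user", "content": "What is your favorite condiment?"}]

response = client.chat_completion(messages=messages, max_tokens=128)
print(response.choices[0].message.content)
```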