Update README.md
README.md
CHANGED
```diff
@@ -25,6 +25,16 @@ tags:
 
 We present AmberChat, an instruction following model finetuned from [LLM360/Amber](https://huggingface.co/LLM360/Amber).
 
+# Evaluation
+
+| Model | MT-Bench |
+|------------------------------------------------------|------------------------------------------------------------|
+| **LLM360/AmberChat** | **5.428125** |
+| [LLM360/Amber](https://huggingface.co/LLM360/Amber) | 2.48750 |
+| [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) | 5.17 |
+| [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) | 5.42 |
+| [Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b) | 5.51 |
+
 ## Model Description
 
 - **Model type:** Language model with the same architecture as LLaMA-7B
@@ -92,15 +102,6 @@ python3 -m fastchat.serve.cli --model-path LLM360/AmberChat
 | model_max_length | 2048 |
 
 
-# Evaluation
-
-| Model | MT-Bench |
-|------------------------------------------------------|------------------------------------------------------------|
-| **LLM360/AmberChat** | **5.428125** |
-| [LLM360/Amber](https://huggingface.co/LLM360/Amber) | 2.48750 |
-| [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) | 5.17 |
-| [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) | 5.42 |
-| [Nous-Hermes-13B](https://huggingface.co/NousResearch/Nous-Hermes-13b) | 5.51 |
 
 # Using Quantized Models with Ollama
 
```
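The context line in the second hunk header shows the README's FastChat CLI entry point (`python3 -m fastchat.serve.cli --model-path LLM360/AmberChat`). For direct programmatic use, a minimal sketch with the Hugging Face `transformers` API follows; it is an illustration under standard assumptions, not text from the committed README, and the prompt and generation settings are placeholders.

```python
# Minimal sketch: loading AmberChat with Hugging Face transformers.
# Not part of the committed README; prompt and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LLM360/AmberChat")
model = AutoModelForCausalLM.from_pretrained("LLM360/AmberChat")

prompt = "What is AmberChat finetuned from?"
inputs = tokenizer(prompt, return_tensors="pt")

# model_max_length is 2048 (see the config table in the second hunk),
# so keep prompt plus generated tokens under that budget.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```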
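The trailing context of the second hunk shows that the moved table now sits above the `# Using Quantized Models with Ollama` heading, whose body falls outside this diff. As a hedged sketch only: assuming a quantized AmberChat build has already been registered locally (e.g. via `ollama create` from a GGUF file) under the hypothetical tag `amberchat`, the official `ollama` Python client could query it like this.

```python
# Hedged sketch, not from the diff: chatting with a locally registered,
# quantized AmberChat via the official `ollama` Python client.
# The tag "amberchat" is hypothetical; it assumes a prior
# `ollama create amberchat -f Modelfile` pointing at a GGUF build.
import ollama

response = ollama.chat(
    model="amberchat",
    messages=[{"role": "user", "content": "Summarize what AmberChat is."}],
)
print(response["message"]["content"])
```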