update the model card to reflect the non-reproducibility of benchmarks (#74)
update the model card to reflect the non-reproducibility of benchmarks (079ecaacd3b70854e46de6e01d2d846704976c27)
Co-authored-by: Clem 🤗 <[email protected]>
README.md
CHANGED
@@ -8,14 +8,11 @@ library_name: transformers
 
 IMPORTANT — This is the new, working version of the Reflection Llama 3.1 70B model. Please use this version.
 
-**Reflection Llama-3.1 70B is
+**Reflection Llama-3.1 70B is an open-source LLM, trained with a new technique called Reflection-Tuning that teaches a LLM to detect mistakes in its reasoning and correct course.**
 
 The model was trained on synthetic data generated by [Glaive](https://glaive.ai). If you're training a model, Glaive is incredible — use them.
 
 ## Benchmarks
-
-
-All benchmarks tested have been checked for contamination by running [LMSys's LLM Decontaminator](https://github.com/lm-sys/llm-decontaminator). When benchmarking, we isolate the `<output>` and benchmark on solely that section.
 
 Trained from Llama 3.1 70B Instruct, you can sample from Reflection Llama-3.1 70B using the same code, pipelines, etc. as any other Llama model. It even uses the stock Llama 3.1 chat template format (though, we've trained in a few new special tokens to aid in reasoning and reflection).
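The removed benchmarking note says scores were computed on the isolated `<output>` section of each completion. As a reference for what that isolation step means, here is a minimal sketch; the `<output>` tag name comes from the card, while the `isolate_output` helper and its regex-based extraction are illustrative assumptions, not the card's actual evaluation code:

```python
import re

def isolate_output(completion: str) -> str:
    """Return only the text inside the model's <output>...</output> block.

    <output> is one of the special tokens mentioned in the card; this
    regex-based extraction is an illustrative assumption, not the
    card's actual harness.
    """
    match = re.search(r"<output>(.*?)</output>", completion, re.DOTALL)
    # Fall back to the whole completion if the model emitted no tags.
    return match.group(1).strip() if match else completion.strip()

completion = "Let me work through this.<output>The answer is 4.</output>"
print(isolate_output(completion))  # → The answer is 4.
```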
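The card's final paragraph says the model uses the stock Llama 3.1 chat template, so prompts can be assembled the same way as for any Llama 3.1 model. A minimal sketch of that stock format built by hand, without loading the tokenizer; the `build_prompt` helper and the placeholder system prompt are assumptions for illustration (the header and end-of-turn tokens are the standard Llama 3.1 ones):

```python
# Sketch: assemble a prompt in the stock Llama 3.1 chat template format.
# The special tokens below are the standard Llama 3.1 ones; the system
# prompt text is a placeholder, not Reflection's recommended default.

def build_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a helpful assistant.", "What is 2 + 2?")
```

In practice you would let `tokenizer.apply_chat_template` produce this string, but writing it out shows where the model's extra reasoning/reflection tokens would appear inside the assistant turn.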