|
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- teknium/openhermes
- cognitivecomputations/dolphin
- jondurbin/airoboros-3.1
- unalignment/toxic-dpo-v0.1
- unalignment/spicy-3.1
language:
- en
---
|
|
|
 |
|
# The flower of Ares. |
|
[GGUF files here](https://huggingface.co/Kquant03/Hippolyta-7B-GGUF) |
|
|
|
Fine-tuned on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). [My team and I](https://huggingface.co/ConvexAI) reformatted many different datasets and included a small amount of private data to see how much we could improve Mistral.
|
|
|
I spoke with it personally for about an hour, and I believe we need to refine the format of our private dataset a bit more, but otherwise it turned out great. I will be submitting it to the Open LLM Leaderboard today.
|
|
|
- Uses the Mistral prompt template with chat-instruct (see the sketch below).
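
A minimal sketch of loading the model with `transformers` and applying the Mistral `[INST] ... [/INST]` chat-instruct format. The repo id below is the base model from this card, used as a placeholder; swap in this fine-tune's actual repo id, which is not stated here:

```python
# Minimal sketch, not the exact setup used for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder: replace with this fine-tune's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Mistral chat-instruct format: each user turn is wrapped in [INST] ... [/INST],
# and the model's reply follows the closing tag.
prompt = "[INST] Give me three facts about Hippolyta. [/INST]"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same `[INST]`/`[/INST]` wrapping should apply when running the linked GGUF files in llama.cpp-based frontends.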