---
license: apache-2.0
datasets:
  - Open-Orca/OpenOrca
  - teknium/openhermes
  - cognitivecomputations/dolphin
  - jondurbin/airoboros-3.1
  - unalignment/toxic-dpo-v0.1
  - unalignment/spicy-3.1
language:
  - en
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/Avdyv2akp7TQTE_eaRHhl.jpeg)

The priestess of Athena.

A fine-tune of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). [My team](https://huggingface.co/ConvexAI) and I reformatted several different datasets and included a small amount of private data to see how much we could improve Mistral.

I spoke to it personally for about an hour, and I believe we need to refine the format of the private dataset a bit more, but other than that, it turned out great. I will be submitting it for open LLM evaluation today.
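
A minimal inference sketch using the `transformers` library is shown below. The repo ID is a placeholder (this card does not state the final model ID), and the prompt and generation settings are illustrative assumptions rather than a recommended configuration.

```python
# Minimal inference sketch with Hugging Face transformers.
# NOTE: "ConvexAI/<model-name>" is a placeholder — substitute this model's actual repo ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ConvexAI/<model-name>"  # placeholder, not the confirmed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; places layers on available devices
)

prompt = "Explain the owl of Athena in one short paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Illustrative sampling settings; tune to taste.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```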