ZoraBetaA1 - SuperCompanion
ZoraBetaA1 is our brand-new AI model, fine-tuned on Iris-Uncensored-Reformat-R2. ZoraBetaA1 showcases strong reasoning capability with a stronger fine-tuned bias toward roleplaying. Built on Zephyr Beta 7B, ZoraBetaA1 also shows great companionship capabilities without hallucinating much, unlike MistThena7B, which was fine-tuned on Mistral 7B v0.1. This architecture lets us increase roleplaying capabilities without starting from scratch, since Zephyr Beta already provides a strong RP foundation; we scaffold on that base and push its roleplaying capabilities further.
ZoraBetaA1 was trained on a cleaned dataset; however, it is still relatively unstable, so please report any issues you find, such as overfitting or suggestions for future models, to our email [email protected]. Feel free to modify the LoRA to your liking (a minimal loading sketch follows below); however, please consider crediting this page, and if you extend its dataset, handle it with care and ethical consideration.
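As a rough starting point for modifying the LoRA, the sketch below loads the adapter on top of the stated parent model with PEFT. The assumption that the N-Bot-Int/ZoraBetaA1 repository hosts the LoRA adapter files (rather than fully merged weights) is ours; adjust the ids, dtype, and devices to your setup.

```python
# Minimal sketch (assumptions noted above): load the ZoraBetaA1 LoRA adapter
# onto the Zephyr-7B-beta parent model using PEFT, then optionally merge it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "N-Bot-Int/ZoraBetaA1"  # assumed to contain the LoRA adapter files

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Fold the adapter into the base weights if you want a standalone checkpoint
# that you can fine-tune or edit further.
merged = model.merge_and_unload()
merged.save_pretrained("zora-beta-a1-merged")
```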
ZoraBetaA1 is:
- Developed by: N-Bot-Int
- License: apache-2.0
- Parent model: HuggingFaceH4/zephyr-7b-beta
- Dataset combined using: UltraDatasetCleanerAndMoshpit-R1 (proprietary software)
Notice
- For a good experience, please use temperature = 1.5, min_p = 0.1, and max_new_tokens = 128 (see the inference sketch below).
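A minimal inference sketch with Hugging Face Transformers using the settings above; the chat-template usage, prompts, and device/dtype choices are our assumptions, and min_p sampling requires a reasonably recent Transformers release.

```python
# Minimal sketch (assumptions noted above): generate with the sampling
# settings recommended on this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "N-Bot-Int/ZoraBetaA1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Zora, a friendly roleplay companion."},
    {"role": "user", "content": "Hi Zora, want to explore the old lighthouse with me?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Settings recommended on this card: temperature 1.5, min_p 0.1, 128 new tokens.
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.5,
    min_p=0.1,
    max_new_tokens=128,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```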
Detail card:
Parameters
- 3 billion parameters
- (Check with your GPU vendor's documentation whether your hardware can run 3B models)
Training
- 300 steps on Iris-Dataset-Reformat-R1
Finetuning tool:
Unsloth AI
- This Zephyr model was trained 2x faster with Unsloth and Hugging Face's TRL library (a rough recipe sketch follows below).
Fine-tuned Using:
Google Colab
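For reference, the sketch below outlines a generic Unsloth + TRL SFTTrainer recipe of the kind described above (Zephyr base, short LoRA fine-tune capped at 300 steps). The dataset path, LoRA settings, and other hyperparameters are illustrative assumptions rather than the exact training configuration, and SFTTrainer argument names vary slightly across TRL versions.

```python
# Rough sketch (assumptions noted above) of an Unsloth + TRL LoRA fine-tune.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the parent model in 4-bit for memory-friendly Colab training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r/alpha/target_modules here are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical local export of the roleplay dataset; replace with your own file.
dataset = load_dataset("json", data_files="iris_reformat.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes each row has a single "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        max_steps=300,           # matches the 300 steps stated on this card
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```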