---
base_model: Meta/tiny-llama
language:
- en
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
datasets:
- iamtarun/python_code_instructions_18k_alpaca
- jtatman/python-code-dataset-500k
- flytech/python-codes-25k
- Vezora/Tested-143k-Python-Alpaca
- codefuse-ai/CodeExercise-Python-27k
- Vezora/Tested-22k-Python-Alpaca
- mlabonne/Evol-Instruct-Python-26k
library_name: adapter-transformers
metrics:
- accuracy
- bertscore
- glue
- perplexity
---

# Uploaded model
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
## Benchmark Results
This model has been fine-tuned for various tasks and evaluated on the following benchmarks:
| Metric     | Value         |
|------------|---------------|
| Accuracy   | Not Available |
| BERTScore  | Not Available |
| GLUE       | Not Available |
| Perplexity | Not Available |
- **Model Size:** 4,124,864 parameters
- **Required Memory:** 0.02 GB
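The memory figure above is consistent with storing the weights in 32-bit floats. A minimal sketch of that back-of-the-envelope estimate, assuming 4 bytes per parameter and ignoring activation and runtime overhead (the helper name is illustrative, not from any library):

```python
def weight_memory_gb(num_params: int, bytes_per_param: int = 4) -> float:
    """Estimate memory needed to hold the raw weights, in GiB.

    Assumes a fixed number of bytes per parameter (4 for fp32,
    2 for fp16/bf16); actual runtime usage will be higher.
    """
    return num_params * bytes_per_param / 1024**3

# 4,124,864 parameters at fp32 -> ~0.02 GB, matching the figure above.
print(round(weight_memory_gb(4_124_864), 2))
```

Loading in half precision (fp16/bf16) would roughly halve this requirement.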
For more details, visit [my GitHub](https://github.com/Agnuxo1).
Thanks for your interest in this model!