All future releases of the MFANN experiment will use Llama-3 as the base model; I may continue fine-tuning Mistral-7B every other release.

This model uses Meta's Llama-3 as its base; benchmarks are pending.
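A minimal sketch of loading the model with the Hugging Face transformers library. The prompt format and generation settings are not documented in this card, so the plain-text prompt below is only an illustration:

```python
# Minimal sketch: load MFANNv0.6 with transformers and generate text.
# Assumes the standard AutoModelForCausalLM / AutoTokenizer interface;
# the prompt below is a placeholder, not the model's intended format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/MFANNv0.6"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what a fine-tuned language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```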


I changed the model name to MFANNV0.6 due to a failed benchmark and the need to resubmit.

Edit: due to continued benchmark failures, I am renaming the model back to MFANNver0.6. The 3B model is also failing benchmarks for some reason, despite the fact that both models run fine on my machine :(
