Trained on an RTX 3090 in about 9 hours, at ~27 s/it with the default 1218 iterations. Built from commit a2607fa of https://github.com/tloen/alpaca-lora.
This is a LoRA adapter for the 7B model.
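The reported wall-clock time is consistent with the per-iteration speed; a quick sanity check of the numbers above:

```python
# Sanity-check the reported training time from the figures above.
iterations = 1218      # default iteration count
secs_per_iter = 27     # reported speed on the RTX 3090
hours = iterations * secs_per_iter / 3600
print(f"{hours:.1f} hours")  # -> 9.1 hours, matching the ~9 hours reported
```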