gghfez/Electra_Elorablate_Lora_v0.1-F16-GGUF
This LoRA adapter was converted to GGUF format from e-n-v-y/Electra_Elorablate_Lora_v0.1
via ggml.ai's GGUF-my-lora space.
Refer to the original adapter repository for more details.
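If you would rather convert the adapter locally instead of using the GGUF-my-lora space, llama.cpp ships a conversion script. The sketch below is only an outline: the local directory names and the output filename are placeholders, and it assumes the original safetensors adapter and its base model have already been downloaded.

```bash
# Hypothetical local conversion with llama.cpp's convert_lora_to_gguf.py
# (directory and file names below are placeholders).
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

python llama.cpp/convert_lora_to_gguf.py ./Electra_Elorablate_Lora_v0.1 \
  --base ./base_model \
  --outtype f16 \
  --outfile Electra_Elorablate_Lora_v0.1-f16.gguf
```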
Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora Electra_Elorablate_Lora_v0.1-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Electra_Elorablate_Lora_v0.1-f16.gguf (...other args)
```
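The adapter's influence can also be scaled up or down with llama.cpp's --lora-scaled flag; the scale value in the sketch below is only illustrative.

```bash
# Apply the adapter at reduced strength (the 0.5 scale is an example value)
llama-cli -m base_model.gguf --lora-scaled Electra_Elorablate_Lora_v0.1-f16.gguf 0.5 (...other args)
```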
To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
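As a usage sketch, assuming llama-server was started with the --lora flag as above and is listening on its default port 8080, it can be queried through its OpenAI-compatible chat completions endpoint; the prompt and parameters below are illustrative.

```bash
# Query a running llama-server instance (port, message, and max_tokens are examples)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 128
  }'
```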
Model tree for gghfez/Electra_Elorablate_Lora_v0.1-F16-GGUF
- Base model: meta-llama/Llama-3.1-70B
- Finetuned: meta-llama/Llama-3.3-70B-Instruct
- Finetuned: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
- Finetuned: Steelskull/L3.3-Electra-R1-70b