Run Locally With LM Studio
#3
by isaiahbjork · opened
Hey, I made a repo to make it easy to run this model locally with LM Studio using less than 3 GB.
Can you provide the Colab link for your code?
Excellent work @isaiahbjork, this works directly with llama.cpp as well:
```
llama-server -m S:\orpheus-3b-0.1-ft-q4_k_m.gguf -c 8192 -ngl 29 --host 0.0.0.0 --port 1234 --cache-type-k q8_0 --cache-type-v q8_0 -fa --mlock
```
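For anyone wiring this up: `-ngl 29` offloads the layers to the GPU, `-c 8192` sets the context size, the `q8_0` cache types quantize the KV cache to save memory, `-fa` enables flash attention, and `--mlock` pins the weights in RAM. Since this serves the same OpenAI-compatible API that LM Studio's local server exposes (here on port 1234), a minimal client sketch like the one below should work against either backend. The model id and prompt are placeholder assumptions, not the real Orpheus prompt/voice-token format (that lives in the repo), and the returned completion is still raw token text that needs decoding to audio:

```python
import requests

# Assumption: a local OpenAI-compatible server (llama-server or LM Studio)
# is listening on port 1234, as in the command above.
BASE_URL = "http://localhost:1234/v1"

response = requests.post(
    f"{BASE_URL}/completions",
    json={
        # Hypothetical model id; list what the server actually loaded via GET /v1/models.
        "model": "orpheus-3b-0.1-ft-q4_k_m",
        # Placeholder prompt; Orpheus expects its own voice/audio token formatting.
        "prompt": "tara: Hey there!",
        "max_tokens": 1200,
        "temperature": 0.6,
    },
    timeout=120,
)
response.raise_for_status()

# Raw completion text (for Orpheus, audio tokens that still need decoding).
print(response.json()["choices"][0]["text"])
```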
Thanks!! This looks great; I'll include it in the README of the main repo!
amuvarma changed discussion status to closed