Run Mistral model on a remote server
#94
opened by icemaro
I am wondering if I can run this model on a remote server?
Is there a way to consume it with a UI-like interface, something like LM Studio, except that the model would run on the remote server?
Ollama + a web UI (easy to run in Docker).
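For the "consume it remotely" part, here is a minimal sketch of a client talking to an Ollama instance running on the server. The host name and model tag are placeholders for your own setup; Ollama's HTTP API listens on port 11434 by default.

```python
import requests

# Placeholder address of the remote server running Ollama;
# replace with your own host (11434 is Ollama's default port).
OLLAMA_URL = "http://my-remote-server:11434/api/generate"

def ask_mistral(prompt: str) -> str:
    """Send a prompt to the remote Ollama instance and return the completion."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_mistral("Explain what a mixture-of-experts model is in one paragraph."))
```

Any web UI that can point at an Ollama endpoint (or a plain script like the one above) can then use the model over the network.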
I use ooba's text-generation-webui as a server.
It's set up to run as a service under Linux; just pass it the right parameters to make it accessible on your network.
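As a rough sketch of the client side, assuming the server was started with its API enabled (e.g. the `--listen` and `--api` flags) and exposes the OpenAI-compatible endpoint on the default port, a remote caller could look like this. The host, port, and generation parameters are assumptions you should adjust to your own install:

```python
import requests

# Placeholder address; text-generation-webui's OpenAI-compatible API
# typically listens on port 5000, but check your own configuration.
API_URL = "http://my-remote-server:5000/v1/chat/completions"

def chat(prompt: str) -> str:
    """Send a single user message to the remote text-generation-webui server."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        "temperature": 0.7,
    }
    resp = requests.post(API_URL, json=payload, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize the Mistral 7B model in three sentences."))
```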
I want to run the Mistral model.
I have put together a simple implementation guide using Runpod (for GPU), Google Colab (for inference), and Gradio (for UI) here:
https://github.com/aigeek0x0/radiantloom-ai/blob/main/mixtral-8x7b-instruct-v-0.1-runpod-template.md
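The Gradio piece is essentially a small UI wrapper around whatever inference endpoint you expose from Runpod or Colab. The sketch below shows the general shape; the endpoint URL and the response parsing are placeholders, not the exact code from the guide:

```python
import gradio as gr
import requests

# Placeholder: point this at whatever inference endpoint you spin up
# (Runpod pod, Colab tunnel, etc.) and adjust the JSON fields to match it.
INFERENCE_URL = "http://my-inference-endpoint:8000/generate"

def generate(prompt: str) -> str:
    """Forward the prompt to the remote model and return its reply."""
    resp = requests.post(INFERENCE_URL, json={"prompt": prompt}, timeout=120)
    resp.raise_for_status()
    return resp.json().get("text", "")

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(lines=4, label="Prompt"),
    outputs=gr.Textbox(label="Model response"),
    title="Mixtral 8x7B Instruct (remote)",
)

if __name__ == "__main__":
    # share=True gives a temporary public link, handy when the UI runs in Colab.
    demo.launch(share=True)
```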