100% LOCAL USE???
Is there a way to run all of this fully locally with Ollama or LM Studio, without any external APIs? And if we use LM Studio, could the app offer a model selector? When I run `curl http://192.168.56.1:1234/v1/models`, all of my local models are listed. I like this project, but with these limitations I would prefer to run it locally.
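For what it's worth, since LM Studio exposes an OpenAI-compatible API, any client that lets you override the base URL can in principle talk to it. Here's a minimal sketch, assuming the `openai` Python package and the server address from the curl command above (the `api_key` value is just a placeholder; LM Studio doesn't validate keys):

```python
# Minimal sketch (not part of this project): pointing an OpenAI-compatible
# client at LM Studio's local server. IP/port taken from the post above.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.56.1:1234/v1",  # LM Studio's OpenAI-compatible endpoint
    api_key="lm-studio",                     # placeholder; LM Studio ignores the key
)

# List every model the local server exposes (same data as the curl command above)
models = [m.id for m in client.models.list().data]
print("Available local models:", models)

# Send a test prompt to the first available model
response = client.chat.completions.create(
    model=models[0],
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

Whether the project itself accepts a custom base URL is a separate question, but this is the mechanism a local-model option would build on.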
I really want to use this with LM Studio.
You can't run this with a local model for now, because those models don't support this level of programming.
I have local servers to test with, and I'm running other models locally that seem larger and more capable.
I do see this being possible to run locally; I'm just not sure how much GPU is needed, or whether any throttling methods have been implemented.
You can use my API; it's very cheap, almost 70% less than other providers.