Model won't load in Ollama on Open WebUI
I get error 500: unable to load model.
I think I had this when gpt-oss first came out, but I hadn't updated Ollama.
Edit: fixed as of 0.11.5, I think.
I get the same error with Ollama 0.11.4: Error: 500 Internal Server Error: unable to load model.
Yes, the architecture name given in the model is "gpt-oss", but Ollama wants to see "gptoss".
Can I fix this myself? I don't know a whole lot about this type of thing; it was hard enough getting Ollama working with my GPU, haha.
The term "gpt-oss" is written directly into the GGUF file. It can't simply be changed to "gptoss", because that would shift the offsets of all the internal data.
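To see why renaming the architecture in-place breaks the file: GGUF stores metadata strings with a length prefix, so shortening "gpt-oss" to "gptoss" changes a length field and shifts every byte after it. Here's a toy sketch (not a real GGUF reader, just the header layout with a single metadata key) that shows where the string lives:

```python
# Toy sketch of a GGUF header: magic, version, counts, then length-prefixed
# key/value metadata. Real files carry many keys plus tensor data after them.
import struct

def build_toy_gguf(arch: str) -> bytes:
    def gguf_string(s: str) -> bytes:
        raw = s.encode("utf-8")
        return struct.pack("<Q", len(raw)) + raw  # uint64 length + bytes

    blob = b"GGUF"                                 # magic
    blob += struct.pack("<I", 3)                   # format version
    blob += struct.pack("<Q", 0)                   # tensor count (none here)
    blob += struct.pack("<Q", 1)                   # metadata key/value count
    blob += gguf_string("general.architecture")    # metadata key
    blob += struct.pack("<I", 8)                   # value type 8 = string
    blob += gguf_string(arch)                      # value, length-prefixed
    return blob

def read_architecture(blob: bytes) -> str:
    assert blob[:4] == b"GGUF"
    offset = 4 + 4 + 8 + 8                         # magic, version, counts
    key_len = struct.unpack_from("<Q", blob, offset)[0]
    offset += 8 + key_len                          # skip the key string
    offset += 4                                    # skip the value type tag
    val_len = struct.unpack_from("<Q", blob, offset)[0]
    offset += 8
    return blob[offset:offset + val_len].decode("utf-8")

print(read_architecture(build_toy_gguf("gpt-oss")))
```

Because the value is length-prefixed, swapping in a shorter name doesn't just change seven bytes; it moves everything that follows, which is why the fix has to happen in Ollama's name matching rather than in the file.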
I modified Ollama to accept both names, but then another error occurred because llama.cpp doesn't support the new quantization format yet. I have to wait for llama.cpp to release an update.
So it has the command to run it in Ollama, but it won't work? I've been trying for days to get rid of this 500 server error. My computer runs the official oss-20b, but it won't run any of the uncensored ones through Ollama. How did the creator make it for Ollama if it doesn't work? Sorry, I'm not an expert in the backend of AIs yet.
As far as I know, this is due to a llama.cpp version dependency that isn't satisfied in Ollama yet. That requirement comes from the original abliterated version.