Memory error when running with ollama
When running `hf.co/google/gemma-3-12b-it-qat-q4_0-gguf` with Ollama, I get the following error:
```
Error: llama runner process has terminated: error:invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x0 pc=0x7ff6dad15619]

goroutine 50 [running]:
github.com/ollama/ollama/ml/nn.(*Conv2D).Forward(...)
	C:/a/ollama/ollama/ml/nn/convolution.go:10
github.com/ollama/ollama/model/models/gemma3.(*VisionModel).Forward(0xc000cf5c40, {0x7ff6dbc0b930, 0xc000fc7f00
```
Other models, such as gemma3 pulled from Ollama's official library, run fine.
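For reference, here is a minimal reproduction. The exact invocation wasn't quoted above, so this assumes the standard `ollama run` form with Ollama's `hf.co/` pull syntax:

```
# Crashes with the nil pointer dereference shown above:
ollama run hf.co/google/gemma-3-12b-it-qat-q4_0-gguf

# Runs fine on the same hardware (model from Ollama's official library):
ollama run gemma3:12b
```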
Here is my environment:
- Windows 11 Pro, build 26100.4349
- RTX5070 (12GB VRAM) with CUDA Version: 12.9
- 32 GB RAM
- Ollama 0.9.5
I hit the same error when running the gemma-3-27b-it GGUF.
Hi @villivateur,
The error you're seeing, "Error: 500 Internal Server Error: llama runner process has terminated: error:invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x1005552d0]", is a known bug in Ollama when running certain versions of the Gemma 3 multimodal models. It is not a problem with your command or environment; it is a software bug within Ollama itself.
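In the meantime, a possible workaround is to run the Gemma 3 builds published in Ollama's own library rather than the Hugging Face GGUF. The `-it-qat` tag below is an assumption on my part; verify it exists at https://ollama.com/library/gemma3 before relying on it:

```
# Pull the QAT build from Ollama's library instead of the HF GGUF
# (tag name assumed; check the library page first):
ollama run gemma3:12b-it-qat
```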
There is already an open issue tracking this in the Ollama GitHub repository; please follow it there for more details and updates.
Thanks.