
Error: pull model manifest ollama

#4
by kudukudu - opened

Hi, I'm trying to use the model with Ollama according to https://huggingface.co/docs/hub/ollama, but it isn't working.

command:
ollama run hf.co/google/gemma-7b-GGUF

result:

pulling manifest 
Error: pull model manifest: Get "Authentication%!r(MISSING)equired?nonce=VZsAKGnamjCp-giE0xXapw&scope=&service=&ts=1729508446": unsupported protocol scheme ""
Google org

Hi @kudukudu ,

this error often appears when trying to use ollama run hf.co/google/gemma-7b-GGUF, because of how Ollama handles direct Hugging Face URLs, especially for gated models.

The most reliable way to get Gemma running with Ollama is to use the models directly from Ollama's official model library. These models are pre-configured and optimized for Ollama's ecosystem.

Instead of: ollama run hf.co/google/gemma-7b-GGUF

use: ollama run gemma:7b

Ollama maintains its own comprehensive library of popular models. When you use a tag like gemma:7b, Ollama pulls the appropriate Gemma 7B model directly from its own verified sources, bypassing the complexities of parsing Hugging Face URLs or handling gated-model access tokens. Kindly try this and let us know if you have any concerns.
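To make the substitution concrete, here is a minimal shell sketch of the recommended workflow. It only prints the commands rather than executing them, since actually pulling the model requires a local Ollama install; the gemma:7b tag is the one suggested above.

```shell
# Ollama library tags use the form <model>:<size>
model="gemma"
size="7b"
tag="${model}:${size}"

# The commands recommended above (printed, not executed, so this
# sketch runs even without a local Ollama daemon):
echo "ollama pull ${tag}"   # download the model from Ollama's library
echo "ollama run ${tag}"    # start an interactive session with it
```

With Ollama installed, running the printed commands pulls the model once and then starts a chat session against the local copy.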

Thank you.
