# gemma2b-nirf-lookup-gguf

This is a GGUF conversion of `coderop12/gemma2b-nirf-lookup-2025`.
## Model Details
- Original Model: coderop12/gemma2b-nirf-lookup-2025
- Format: GGUF (F16 precision)
- File Size: ~4.9 GB
- Architecture: Gemma 2B
- Specialization: NIRF (National Institutional Ranking Framework) lookup and ranking queries
## Usage

### With llama.cpp

```bash
./llama-cli -m gemma2b-nirf-lookup-gguf.gguf -p "What is the NIRF ranking methodology?"
```
### With Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(model_path="gemma2b-nirf-lookup-gguf.gguf")
response = llm("What are the top NIRF ranked engineering colleges?")
print(response["choices"][0]["text"])
```
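The base Gemma models expect chat turns wrapped in `<start_of_turn>` / `<end_of_turn>` markers. Assuming this fine-tune inherits that template from its base model (an assumption, not confirmed by the original repository), wrapping queries before passing them to the model may improve response quality. A minimal sketch:

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a user query in Gemma-style chat-turn markers.

    Assumption: the fine-tune keeps the base Gemma chat template
    (a user turn followed by an open model turn).
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("What is the NIRF ranking methodology?")
# Pass `prompt` to llm(...) in place of the raw question.
```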
### With Ollama

```bash
# First, create a Modelfile
echo 'FROM ./gemma2b-nirf-lookup-gguf.gguf' > Modelfile
ollama create gemma2b-nirf-lookup-gguf -f Modelfile
ollama run gemma2b-nirf-lookup-gguf "Explain NIRF ranking parameters"
```
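The Modelfile can also carry generation settings and a system prompt. A sketch of an expanded version (the `SYSTEM` text and parameter values are illustrative choices, not part of the original conversion):

```
FROM ./gemma2b-nirf-lookup-gguf.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 2048
SYSTEM You are an assistant for NIRF ranking and Indian higher-education queries.
```

`num_ctx 2048` matches the model's context length listed under Technical Details below.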
## Model Capabilities
This model is specifically fine-tuned for:
- NIRF ranking information and queries
- Indian higher education institutional data
- University and college ranking explanations
- Educational policy and framework questions
## Technical Details

- Precision: F16 (16-bit floating point; not further quantized)
- Context Length: 2048 tokens
- License: follows the original model's license terms (see below)
- Converted using: llama.cpp conversion tools
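With a 2048-token context window, long prompts (e.g. full institution lists) can overflow the context. A rough pre-check using the common ~4-characters-per-token heuristic (the exact count depends on the Gemma tokenizer, so treat this as an estimate only):

```python
def fits_in_context(prompt: str, max_new_tokens: int = 256,
                    context_len: int = 2048,
                    chars_per_token: float = 4.0) -> bool:
    """Estimate whether prompt + generation budget fits the context window.

    Uses a crude chars/4 token estimate; for an exact count, tokenize
    with the model's own tokenizer instead.
    """
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + max_new_tokens <= context_len

print(fits_in_context("What is the NIRF ranking methodology?"))  # short prompt: True
```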
## Original Model License

Please refer to the original model repository for license information.
## Model Tree

Lineage of coderop12/gemma2b-nirf-lookup-gguf:

- Base model: google/gemma-2-2b
- Finetuned: google/gemma-2-2b-it
- Finetuned: coderop12/gemma2b-nirf-lookup-2025