Instructions to use srisree/nano_coder_GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use srisree/nano_coder_GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="srisree/nano_coder_GGUF",
    filename="nano_coder.Q8_0.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a React component that renders a login form."}
    ]
)
```
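Because NanoCoder is a Fill-in-the-Middle model, you can also drive it with a raw FIM prompt rather than chat messages. A minimal sketch reusing the `llm` object above, assuming Qwen-style FIM special tokens (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`); check the model's tokenizer config to confirm the exact tokens:

```python
# Hedged FIM sketch: the special tokens assume the Qwen coder convention
prefix = "function add(a: number, b: number) {\n  return "
suffix = ";\n}"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

out = llm(prompt, max_tokens=32, stop=["\n\n"])
print(out["choices"][0]["text"])  # ideally completes the gap, e.g. "a + b"
```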
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use srisree/nano_coder_GGUF with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf srisree/nano_coder_GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf srisree/nano_coder_GGUF:Q8_0
```
Install from WinGet (Windows)
```powershell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf srisree/nano_coder_GGUF:Q8_0

# Run inference directly in the terminal:
llama-cli -hf srisree/nano_coder_GGUF:Q8_0
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf srisree/nano_coder_GGUF:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf srisree/nano_coder_GGUF:Q8_0
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf srisree/nano_coder_GGUF:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf srisree/nano_coder_GGUF:Q8_0
```
Use Docker
```sh
docker model run hf.co/srisree/nano_coder_GGUF:Q8_0
```
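However you install it, `llama-server` exposes an OpenAI-compatible API (port 8080 by default), so any OpenAI client can talk to it. A minimal sketch using the `openai` Python package; the `model` value is illustrative, since a single-model server serves whatever it loaded at startup:

```python
# pip install openai
from openai import OpenAI

# Point the client at the local llama-server instead of api.openai.com
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

resp = client.chat.completions.create(
    model="srisree/nano_coder_GGUF:Q8_0",  # illustrative; the server loaded this model at startup
    messages=[{"role": "user", "content": "Write a TypeScript type for a button component's props."}],
)
print(resp.choices[0].message.content)
```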
- LM Studio
- Jan
- Ollama
How to use srisree/nano_coder_GGUF with Ollama:
```sh
ollama run hf.co/srisree/nano_coder_GGUF:Q8_0
```
- Unsloth Studio
How to use srisree/nano_coder_GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for srisree/nano_coder_GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for srisree/nano_coder_GGUF to start chatting
```
Use Hugging Face Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for srisree/nano_coder_GGUF to start chatting
```
- Pi
How to use srisree/nano_coder_GGUF with Pi:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf srisree/nano_coder_GGUF:Q8_0
```
Configure the model in Pi
```sh
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the provider to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "srisree/nano_coder_GGUF:Q8_0" }
      ]
    }
  }
}
```

Run Pi

```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use srisree/nano_coder_GGUF with Hermes Agent:
Start the llama.cpp server
```sh
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf srisree/nano_coder_GGUF:Q8_0
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default srisree/nano_coder_GGUF:Q8_0
```
Run Hermes
```sh
hermes
```
- Docker Model Runner
How to use srisree/nano_coder_GGUF with Docker Model Runner:
```sh
docker model run hf.co/srisree/nano_coder_GGUF:Q8_0
```
- Lemonade
How to use srisree/nano_coder_GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull srisree/nano_coder_GGUF:Q8_0
```
Run and chat with the model
```sh
lemonade run user.nano_coder_GGUF-Q8_0
```
List all available models
```sh
lemonade list
```
NanoCoder is a Fill-in-the-Middle (FIM) language model specifically designed for React frontend development and coding assistance. It helps users with intelligent code autocompletion and context-aware generation. The model was fine-tuned using Unsloth on the Qwen 3 0.6B base model, leveraging a high-quality FIM dataset curated from GitHub repositories to enhance coding capabilities and developer productivity.
🧠 Datasets
We trained NanoCoder using a high-quality Fill-in-the-Middle (FIM) dataset curated from GitHub repositories:
srisree/nextjs_typescript_fim_dataset on Hugging Face.
This dataset focuses on React/Next.js and TypeScript projects, providing rich, real-world coding examples that help the model understand frontend architecture, component composition, and React ecosystem patterns.
By leveraging this dataset, NanoCoder learns to:
- Predict and fill missing code intelligently using FIM objectives (see the sketch after this list).
- Understand React component structures and TypeScript typing patterns.
- Generate clean, production-grade frontend code snippets.
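To make the FIM objective concrete, here is a hedged sketch of how a single code sample can be turned into a FIM training string: a random span becomes the hidden "middle", and the model learns to reconstruct it from the surrounding prefix and suffix. The special tokens follow the Qwen coder convention and are an assumption; the actual template used for NanoCoder may differ:

```python
import random

# Assumed Qwen-style FIM tokens; verify against the model's tokenizer
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<|fim_prefix|>", "<|fim_suffix|>", "<|fim_middle|>"

def to_fim_example(code: str) -> str:
    # Hide a random span of the file as the "middle" to be predicted
    i, j = sorted(random.sample(range(len(code)), 2))
    prefix, middle, suffix = code[:i], code[i:j], code[j:]
    # The model sees both sides of the gap, then generates the hidden span
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"

print(to_fim_example("export const Button = () => <button>Click</button>;"))
```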
⚙️ FIM Training Colab Script
We’re preparing an interactive Google Colab notebook for reproducing the Fill-in-the-Middle (FIM) fine-tuning process used to train NanoCoder with Unsloth on the Qwen 3 0.6B base model.
The Colab script will include:
- ✅ Environment setup with Unsloth and Qwen 3 0.6B
- ✅ Loading and preprocessing the Next.js TypeScript FIM Dataset
- ✅ Training configuration (LoRA, batch size, sequence length, etc.)
- ✅ Evaluation and inference examples
🚀 Coming soon... Stay tuned for the full release!
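Until the notebook lands, here is a minimal sketch of what the Unsloth setup could look like. The base checkpoint name and all hyperparameters below are placeholders chosen for illustration, not the exact recipe used to train NanoCoder:

```python
# pip install unsloth
from unsloth import FastLanguageModel

# Assumed base checkpoint; the exact Qwen 3 0.6B variant may differ
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-0.6B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r, alpha, and target modules are illustrative defaults
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

From there, training would proceed with a standard supervised fine-tuning loop (e.g., TRL's `SFTTrainer`) over the FIM-formatted dataset.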
⚙️ Setup and Run NanoCoder Locally with Ollama in VS Code
Step-by-step guide to install, configure, and use NanoCoder for intelligent React frontend code completion with the Continue VS Code extension.
🧠 Prerequisites
Before getting started, ensure you have the following installed:
- VS Code
- Ollama (latest version)
- Continue extension
- A system with at least 8GB RAM (recommended for 0.6B models)
🧩 Step 1: Install Ollama
If you haven’t already, download and install Ollama:
- macOS / Linux / Windows: https://ollama.ai/download
Once installed, open your terminal and verify the installation with `ollama --version`.
💾 Step 2: Pull NanoCoder Model
```sh
ollama pull srisree/nanocoder
```
⚡ Step 3: Run NanoCoder with Ollama
Once downloaded, you can test NanoCoder directly in the terminal:
```sh
ollama run srisree/nanocoder
```
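To get tab autocompletion in VS Code, point the Continue extension at the Ollama model. A minimal sketch using Continue's JSON config (`~/.continue/config.json`; newer Continue releases use a YAML config instead, and the `title` is arbitrary):

```json
{
  "tabAutocompleteModel": {
    "title": "NanoCoder",
    "provider": "ollama",
    "model": "srisree/nanocoder"
  }
}
```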
Read more in the Continue Docs.
