Jan-Nano-128k: Empowering deeper research through extended context understanding.

Authors: Alan Dao, Bach Vu Dinh
Overview
Jan-Nano-128k represents a significant advancement in compact language models for research applications. Building upon the success of Jan-Nano, this enhanced version features a native 128k context window that enables deeper, more comprehensive research capabilities without the performance degradation typically associated with context extension methods.
Key Improvements:
- Research Deeper: Extended context allows for processing entire research papers, lengthy documents, and complex multi-turn conversations
- Native 128k Window: Built from the ground up to handle long contexts efficiently, maintaining performance across the full context range
- Enhanced Performance: Unlike traditional context extension methods, Jan-Nano-128k shows improved performance with longer contexts
This model maintains full compatibility with Model Context Protocol (MCP) servers while dramatically expanding the scope of research tasks it can handle in a single session.
Evaluation
Jan-Nano-128k has been rigorously evaluated on the SimpleQA benchmark using our MCP-based methodology, demonstrating superior performance compared to its predecessor.
Why Jan-Nano-128k?
Traditional approaches to extending context length, such as YaRN (Yet another RoPE extensioN), often degrade performance as the context grows. Jan-Nano-128k breaks this pattern: performance holds up, and in our evaluations improves, as context length increases.
This fundamental difference makes Jan-Nano-128k ideal for research applications requiring deep document analysis, multi-document synthesis, and complex reasoning over large information sets.
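For context, the serving flags in the deployment commands below encode this window via YaRN scaling: a factor of 3.2 applied to the model's 40,960 original positions recovers the full length,

$$3.2 \times 40960 = 131072 = 128 \times 1024 \text{ tokens},$$

which matches the --max-model-len passed to vLLM.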
How to Run Locally
Support in the Jan desktop app is in progress. In the meantime, you can use the deployment options below, which we have tested.
For additional tutorials and community guidance, visit our Discussion Forums.
Deployment
Deploy using vLLM:
vllm serve Menlo/Jan-nano-128k \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --rope-scaling '{"rope_type":"yarn","factor":3.2,"original_max_position_embeddings":40960}' \
  --max-model-len 131072
Or llama-server from llama.cpp:
llama-server ... --rope-scaling yarn --rope-scale 3.2 --yarn-orig-ctx 40960
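Filling in the elided flags, a complete invocation might look like the sketch below. The GGUF filename, host, port, and context size are illustrative assumptions, not part of the original command:

# Hypothetical local GGUF file; substitute whatever quantization you downloaded.
# -c 131072 requests the full 128k window; lower it if memory is tight.
llama-server -m Jan-nano-128k-Q8_0.gguf \
  --host 0.0.0.0 --port 1234 \
  -c 131072 \
  --rope-scaling yarn --rope-scale 3.2 --yarn-orig-ctx 40960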
Note: The chat template is included in the tokenizer. For troubleshooting, download the Non-think chat template.
Recommended Sampling Parameters
- Temperature: 0.7
- Top-p: 0.8
- Top-k: 20
- Min-p: 0.0
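As a minimal sketch, these values can go straight into a request against the OpenAI-compatible chat endpoint either server exposes. Note that top_k and min_p are extensions beyond the standard OpenAI fields (both vLLM and llama-server accept them), and the prompt is purely illustrative:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Menlo/Jan-nano-128k",
    "messages": [{"role": "user", "content": "Summarize the main idea of YaRN context scaling in two sentences."}],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0.0
  }'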
Community & Support
- Discussions: HuggingFace Community
- Issues: GitHub Repository
- Documentation: Official Docs
Citation
@misc{jan-nano-128k,
  title={Jan-Nano-128k: Deep Research with Extended Context},
  author={Dao, Alan and Dinh, Bach Vu},
  year={2024},
  url={https://huggingface.co/Menlo/Jan-nano-128k}
}