
Correctly points out that nvidia/NVIDIA-Nemotron-Nano-9B-v2 is finetuned from nvidia/NVIDIA-Nemotron-Nano-9B-v2-Base, instead of from nvidia/NVIDIA-Nemotron-Nano-12B-v2 and nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base

Thank you for the comment. The meta information is correct. The model training path is nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base -> nvidia/NVIDIA-Nemotron-Nano-12B-v2 -> nvidia/NVIDIA-Nemotron-Nano-9B-v2, as described in the tech blog and tech report.

suhara changed pull request status to closed
