I'm most interested in content rerouting between LLM and VLLM agents for automation possibilities. Using templates for each agent, which are then filled in by another agent's inputs, seems really useful.
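A minimal sketch of that template idea, assuming a simple string-template pipeline where one agent's output fills a placeholder in the next agent's prompt (the agent names, template, and outputs below are made up for illustration, not part of any real framework):

```python
# Hypothetical sketch: routing one agent's output into another agent's
# prompt template. All names and strings here are illustrative only.

def fill_template(template: str, **fields: str) -> str:
    """Fill a prompt template with outputs from upstream agents."""
    return template.format(**fields)

# Template for a downstream "summarizer" agent; {scene_report} is the slot
# that an upstream vision-language agent's output will fill.
SUMMARIZER_TEMPLATE = (
    "You are a summarizer.\n"
    "Describe the key points of this scene report:\n{scene_report}"
)

# Simulated output from the upstream vision-language agent.
vision_output = "Two forklifts idle near loading dock 3; one pallet unsecured."

prompt = fill_template(SUMMARIZER_TEMPLATE, scene_report=vision_output)
print(prompt)
```

Chaining agents this way keeps each agent's prompt fixed while only the slotted-in content changes, which makes the routing step easy to automate and test.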
We’re excited to share that Llama Nemotron Super v1.5 -- our latest open reasoning model -- is leading the Artificial Analysis Intelligence Index, a leaderboard spanning advanced math, science, and agentic tasks, for models running on a single NVIDIA H100. Super v1.5 is trained with high-quality synthetic reasoning data generated from models such as Qwen3-235B and DeepSeek R1. Beyond leading accuracy, it also delivers high throughput.

Key features:
- Leading accuracy on multi-step reasoning, math, coding, and function calling
- Post-trained using RPO, DPO, and RLVR across 26M+ synthetic examples
- Fully transparent training data on Hugging Face (Nemotron-Post-Training-Dataset-v1)
Try Super v1.5 on build.nvidia.com or download it from Hugging Face.