DeepSeek-R1-Distill-Qwen-7B
Run DeepSeek-R1-Distill-Qwen-7B, optimized for Intel NPUs, with nexaSDK.
Quickstart
1. Install nexaSDK and create a free account at sdk.nexa.ai
2. Activate your device with your access token:
   nexa config set license '<access_token>'
3. Run the model on NPU in one line:
   nexa infer NexaAI/deepSeek-r1-distill-qwen-7B-intel-npu
Model Description
DeepSeek-R1-Distill-Qwen-7B is a distilled variant of DeepSeek-R1, built on a Qwen 7B base model (Qwen2.5-Math-7B).
It is designed for efficient reasoning and instruction-following while maintaining strong performance across coding, logic, and multilingual tasks. Distillation compresses the capabilities of larger DeepSeek models into a lighter 7B parameter model, making it more practical for edge deployment and resource-constrained environments.
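Per the DeepSeek-R1 report, the distilled variants were produced by supervised fine-tuning on reasoning traces generated by the larger R1 model. As a minimal sketch of the general idea behind logit-level knowledge distillation (a classic formulation for compressing a teacher into a smaller student, not necessarily the exact recipe used here), the temperature-softened KL loss can be written as:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with a distillation temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in the classic knowledge-distillation formulation.
    Illustrative only: DeepSeek's distilled models were trained via SFT
    on R1-generated data rather than this exact objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))

# Identical logits give zero loss; diverging logits give a positive loss.
print(distill_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```

The student is pushed to match the teacher's full output distribution (not just its argmax), which is how a 7B model can absorb much of a larger model's behavior.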
Features
- Distilled from DeepSeek-R1: Retains core reasoning strengths in a smaller, faster footprint.
- Instruction-tuned: Optimized for comprehension, logic, and task completion.
- Multilingual coverage: Handles diverse language inputs with improved efficiency.
- Compact yet capable: Balances performance with deployability on a wide range of hardware.
Use Cases
- Conversational AI and instruction-following assistants.
- Coding support, debugging, and algorithmic reasoning.
- Multilingual content generation and translation.
- Lightweight deployment on edge or limited-resource devices.
Inputs and Outputs
Input: Text prompts including natural language queries, instructions, or code snippets.
Output: Direct responses—answers, explanations, code, or translations—without extra reasoning annotations.
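For intuition about what "text in, text out" looks like under the hood, the upstream DeepSeek-R1-Distill models wrap a single-turn prompt in special chat tokens. The sketch below assembles such a prompt; the exact token strings are an assumption based on the public upstream model card, and `nexa infer` applies the correct template for you automatically:

```python
def build_prompt(user_message: str) -> str:
    """Assemble a single-turn prompt in the upstream DeepSeek-R1-Distill
    chat format. The special tokens here are an assumption taken from the
    public upstream model card; nexaSDK handles templating internally."""
    return f"<｜begin▁of▁sentence｜><｜User｜>{user_message}<｜Assistant｜>"

print(build_prompt("Write a Python function that reverses a string."))
```

In practice you only supply the plain user message; the runtime wraps it and returns the model's completion as plain text.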
License
- Licensed under Apache-2.0