---
license: mit
language:
- en
library_name: transformers
tags:
- qwen
- qwen3
- qwen3-1.7b
- text-generation
- AMD
- Ryzen
- NPU
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-1.7B
---
# 🐉 Qwen3 1.7B – Optimized for FastFlowLM on AMD Ryzen™ AI NPU (XDNA2 Only)

## Model Summary
This model is derived from Qwen3 1.7B by Alibaba Cloud and has been optimized with quantization and runtime tuning specifically for AMD Ryzen™ AI NPUs using the FastFlowLM runtime.
✅ This model is released under the permissive MIT License.
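For a quick functional check of the base model outside the NPU path, the snippet below is a minimal sketch using the Hugging Face `transformers` API on CPU/GPU. The model id, prompt, and sampling settings are illustrative assumptions; running on the Ryzen AI NPU itself goes through the FastFlowLM runtime and is not shown here.

```python
# Minimal sketch (assumptions: base repo id "Qwen/Qwen3-1.7B", illustrative
# prompt and sampling settings). This runs the *base* model on CPU/GPU via
# transformers; NPU execution uses the FastFlowLM runtime instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-1.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what an NPU is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a short completion and print only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```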
## 📝 License & Usage Terms

### Base Model License
Licensed under MIT by Alibaba Cloud:
👉 https://huggingface.co/Qwen/Qwen3-1.7B

Key permissions:
- Free for commercial and non-commercial use
- Redistribution and modification permitted
- No attribution required (though encouraged)
### Redistribution Notice
- This repository does not contain original or fine-tuned base weights.
- You must download the base weights directly from the official Qwen page (see the sketch below):
👉 https://huggingface.co/Qwen/Qwen3-1.7B
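As an illustration only, the following sketch fetches the base weights with `huggingface_hub`; the repo id and local directory are assumptions, not contents of this repository.

```python
# Minimal sketch (assumptions: base repo id "Qwen/Qwen3-1.7B", hypothetical
# local directory). Downloads the official base weights; this repository
# does not redistribute them.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Qwen/Qwen3-1.7B",      # official Qwen base model (assumed id)
    local_dir="./qwen3-1.7b-base",  # illustrative local target directory
)
print(f"Base weights downloaded to: {local_path}")
```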
### If Fine-tuned

If this version includes quantization or additional training:
- Base Model License: MIT
- Derivative Weights License: [e.g., MIT, CC-BY-NC-4.0, custom]
- Training Dataset License(s):
  - [Dataset A] – [license]
  - [Dataset B] – [license]
It is your responsibility to ensure compliance with the dataset licenses.
## Intended Use
- Target Applications: On-device LLM, embedded NLP, NPU inference, research
- Not Recommended For: High-stakes decisions or commercial deployment without further testing
## Limitations & Risks
- Smaller model may underperform on complex generation tasks
- May reflect biases in pretraining data
- Not suitable for sensitive or regulated use cases without auditing
## Citation

@misc{qwen32024,
  title={Qwen3: Smaller, Smarter, and More Open},
  author={Alibaba Cloud},
  year={2024},
  url={https://huggingface.co/Qwen}
}