Question: Clarification on Qwen License Agreement Regarding Output Usage
I came across the following clause in the Qwen LICENSE AGREEMENT:
"If you use the Materials or any outputs or results therefrom to create, train, fine-tune, or improve an AI model that is distributed or made available, you shall prominently display 'Built with Qwen' or 'Improved using Qwen' in the related product documentation."
Based on this, my understanding is that, as long as the required phrase (“Built with Qwen” or “Improved using Qwen”) is clearly displayed, it is possible to fine-tune other LLMs (e.g. Qwen2.5-0.5B) or train embedding models using outputs of Qwen2.5-72B-Instruct and then distribute them under a different license, such as Apache-2.0.
Is this interpretation correct? Could I use Qwen2.5-72B-Instruct in these scenarios and still release the resulting models under a different license?
Thanks for any clarification!
Hi,
It's important to distinguish between using the model parameters (weights) and using the model outputs (results). For Qwen2.5-72B-Instruct, you may use its outputs or results as long as the stated conditions are met. Those outputs can be used to fine-tune other models; however, those other models may be governed by their own licenses, which must also be respected.
For example, if you fine-tune Qwen2.5-0.5B, which is licensed under Apache 2.0, using outputs from Qwen2.5-72B-Instruct, the resulting model must satisfy both sets of licensing requirements:
- Under the license of Qwen2.5-72B-Instruct, any distributed or released fine-tuned model must prominently display an acknowledgment such as "Built with Qwen" or "Improved using Qwen."
- Under the Apache 2.0 license of Qwen2.5-0.5B:
  - A copy of the original license must accompany the fine-tuned model.
  - Any modifications made during fine-tuning must be documented. Note that while the base model remains governed by Apache 2.0, the modifications introduced during fine-tuning may be released under a different license, provided this does not conflict with the terms of the original license.
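To make this concrete, here is a hypothetical sketch of what the top of a model card or README for such a fine-tuned model might include (the model name, dataset description, and wording beyond the required phrases are illustrative, not prescribed by either license):

```
# my-finetuned-0.5b (example model card)

Built with Qwen

This model is a fine-tune of Qwen2.5-0.5B (Apache 2.0) on a dataset of
responses generated by Qwen2.5-72B-Instruct.

## License
- Base model: Qwen2.5-0.5B, Apache License 2.0 (a copy of the license is
  included in this repository as LICENSE).
- Modifications: the fine-tuned weights in this repository are released
  under Apache License 2.0.

## Changes from the base model
- Fine-tuned for N epochs on the synthetic dataset described above.
```

The key points are the prominent "Built with Qwen" notice, the bundled copy of the Apache 2.0 license, and the documented modifications; the rest of the layout is up to you.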
Thank you for the clarification. Just to confirm my understanding: when fine-tuning Qwen2.5-0.5B using outputs from Qwen2.5-72B-Instruct and releasing it, it is necessary to display the acknowledgment "Built with Qwen" or "Improved using Qwen," but it is not required to release the fine-tuned model under the Qwen license. Is that correct?
> when fine-tuning Qwen2.5-0.5B using outputs from Qwen2.5-72B-Instruct and releasing it, it is necessary to display the acknowledgment "Built with Qwen" or "Improved using Qwen," but it is not required to release the fine-tuned model under the Qwen license. Is that correct?
yes
Thank you very much for the confirmation! I appreciate your help in clarifying this.