name: Open LLM Leaderboard
---

# **Galactic-Qwen-14B-Exp2**

Galactic-Qwen-14B-Exp2 is based on the Qwen 2.5 14B architecture and is designed to enhance the reasoning capabilities of 14B-parameter models. It is optimized for general-purpose reasoning and question answering, excelling at contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.

## **Key Improvements**
1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving its ability to answer questions accurately and generate coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
4. **Long-Context Support**: Supports up to 128K tokens of input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses (a sketch of how these limits map to generation settings follows this list).
5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
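
As a rough illustration of the limits in point 4, here is a minimal sketch of how the 128K-token input and 8K-token output budgets map onto `transformers` generation arguments. The repo id `prithivMLmods/Galactic-Qwen-14B-Exp2` and the input file are assumptions for illustration, not part of the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Galactic-Qwen-14B-Exp2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Hypothetical long document; truncate it to the 128K-token context window.
long_document = open("report.txt", encoding="utf-8").read()
inputs = tokenizer(
    long_document,
    return_tensors="pt",
    truncation=True,
    max_length=131072,  # 128K-token input context
).to(model.device)

# Cap generation at the stated 8K-token output budget.
outputs = model.generate(**inputs, max_new_tokens=8192)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```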

## **Quickstart with transformers**

Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
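
The following is a minimal sketch following the standard Qwen 2.5 `transformers` quickstart pattern; the repo id `prithivMLmods/Galactic-Qwen-14B-Exp2` is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Galactic-Qwen-14B-Exp2"  # assumed repo id

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt with the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate a response and strip the prompt tokens from the output.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```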