prithivMLmods committed
Commit 3531ceb · verified · 1 Parent(s): 87e517a

Update README.md

Files changed (1):
  1. README.md (+3 -3)
README.md CHANGED
@@ -123,14 +123,12 @@ model-index:
   name: Open LLM Leaderboard
 ---
 
-![zzzzzzzz.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/BMUFGlv1AHfPrVZ0HBTFJ.png)
+![Exp2.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/jq-on7WPMQC0ynvgs-kXB.png)
 
 # **Galactic-Qwen-14B-Exp2**
 
 Galactic-Qwen-14B-Exp2 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
 
-![Model Performance Metrics - visual selection(1).png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/BOvTd5yP9LxLo0UJU3pgy.png)
-
 ## **Key Improvements**
 1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses.
 2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
@@ -138,6 +136,8 @@ Galactic-Qwen-14B-Exp2 is based on the Qwen 2.5 14B modality architecture, desig
 4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
 5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
 
+![Performance Metrics Diagram - visual selection.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/qrW-WJjBJyJ3Mknqb61Xy.png)
+
 ## **Quickstart with transformers**
 
 Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
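The quickstart snippet itself falls outside the diff context shown above. For reference, a minimal sketch of the standard Qwen 2.5 `apply_chat_template` flow is given below; the Hub repo id `prithivMLmods/Galactic-Qwen-14B-Exp2` is an assumption, not taken from this diff.

```python
# Hypothetical quickstart sketch following the usual Qwen 2.5 chat-template pattern.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Galactic-Qwen-14B-Exp2"  # assumed repo id

# Load weights and tokenizer; device_map="auto" spreads the 14B weights across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the difference between recursion and iteration."},
]

# Render the chat into the model's prompt format and append the generation prompt.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# The card advertises up to 8K output tokens; 512 is a modest default for a demo.
output_ids = model.generate(**inputs, max_new_tokens=512)

# Strip the prompt tokens and decode only the newly generated continuation.
response = tokenizer.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(response)
```

Note that `device_map="auto"` requires the `accelerate` package to be installed, and `max_new_tokens` can be raised toward the 8K output limit quoted in the card when longer responses are needed.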