smajumdar94 committed
Commit 9833ada · verified · 1 Parent(s): 80c8336

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 # OpenReasoning-Nemotron-1.5B Overview
 
 ## Description: <br>
-OpenReasoning-Nemotron-1.5B is a large language model (LLM) derived from Qwen2.5-1.5B-Instruct (the reference model). It is a reasoning model post-trained for math, code, and science solution generation. The model supports a context length of 48K tokens. The OpenReasoning model is available in the following sizes: 1.5B, 7B, 14B, and 32B. <br>
+OpenReasoning-Nemotron-1.5B is a large language model (LLM) derived from Qwen2.5-1.5B-Instruct (the reference model). It is a reasoning model post-trained for math, code, and science solution generation. The model supports a context length of 64K tokens. The OpenReasoning model is available in the following sizes: 1.5B, 7B, 14B, and 32B. <br>
 
 This model is ready for commercial/non-commercial research use. <br>
 
@@ -88,7 +88,7 @@ messages = [
 ]
 outputs = pipeline(
     messages,
-    max_new_tokens=48000,
+    max_new_tokens=64000,
 )
 print(outputs[0]["generated_text"][-1]['content'])
 ````
@@ -167,13 +167,13 @@ Network Architecture: Qwen-1.5B-Instruct
 **Input Type(s):** Text <br>
 **Input Format(s):** String <br>
 **Input Parameters:** One-Dimensional (1D) <br>
-**Other Properties Related to Input:** Context length up to 48,000 tokens <br>
+**Other Properties Related to Input:** Context length up to 64,000 tokens <br>
 
 ## Output: <br>
 **Output Type(s):** Text <br>
 **Output Format:** String <br>
 **Output Parameters:** One-Dimensional (1D) <br>
-**Other Properties Related to Output:** Context length up to 48,000 tokens <br>
+**Other Properties Related to Output:** Context length up to 64,000 tokens <br>
 
 Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
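One practical implication of the change above: the README's `pipeline(...)` call raises `max_new_tokens` from 48,000 to 64,000, and prompt tokens and generated tokens share the same context window, so the usable generation budget shrinks as the prompt grows. A minimal sketch of that arithmetic (the helper below is hypothetical, not part of the model card; only the 64,000-token limit comes from this commit):

```python
# Illustrative only: how a 64K-token context window is split between the
# prompt and generation. CONTEXT_LIMIT reflects this commit's update
# (previously 48,000); generation_budget() is a hypothetical helper.

CONTEXT_LIMIT = 64_000

def generation_budget(prompt_tokens: int, context_limit: int = CONTEXT_LIMIT) -> int:
    """Largest safe max_new_tokens for a prompt of the given token length."""
    if prompt_tokens >= context_limit:
        raise ValueError("prompt already fills the context window")
    return context_limit - prompt_tokens

# A 2,000-token prompt leaves 62,000 tokens for generation.
print(generation_budget(2_000))
```

Passing `max_new_tokens=64000` with a non-empty prompt may exceed the window on some backends; capping it with a helper like this keeps requests within the model's stated limit.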