rpand002 committed
Commit 9815bc7 · verified · 1 parent: f3b89ce

Update README.md

Files changed (1): README.md +3 −3
README.md CHANGED

```diff
@@ -13,7 +13,7 @@ base_model:
 # Granite-3.3-2B-Instruct
 
 **Model Summary:**
-Granite-3.3-2B-Instruct is a 2-billion parameter 128K context length language model fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-2B-Base, the model delivers significant gains on benchmarks for measuring generic performance including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It also supports Fill-in-the-Middle (FIM) for code completion tasks and structured reasoning through \<think\>\<\/think\> and \<response\>\<\/response\> tags, providing clear separation between internal thoughts and final outputs. The model has been trained on a carefully balanced combination of permissively licensed data and curated synthetic tasks.
+Granite-3.3-2B-Instruct is a 2-billion parameter 128K context length language model fine-tuned for improved reasoning and instruction-following capabilities. Built on top of Granite-3.3-2B-Base, the model delivers significant gains on benchmarks for measuring generic performance including AlpacaEval-2.0 and Arena-Hard, and improvements in mathematics, coding, and instruction following. It has also been trained with Fill-in-the-Middle (FIM) for code completion tasks and supports structured reasoning through \<think\>\<\/think\> and \<response\>\<\/response\> tags, providing clear separation between internal thoughts and final outputs. The model has been trained on a carefully balanced combination of permissively licensed data and curated synthetic tasks.
 
 
 - **Developers:** Granite Team, IBM
@@ -29,7 +29,7 @@ English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian,
 This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
 
 **Capabilities**
-* **Thinking**
+* Thinking
 * Summarization
 * Text classification
 * Text extraction
@@ -38,7 +38,7 @@ This model is designed to handle general instruction-following tasks and can be
 * Code related tasks
 * Function-calling tasks
 * Multilingual dialog use cases
-* **Fill-in-the-middle**
+* Fill-in-the-middle
 * Long-context tasks including long document/meeting summarization, long document QA, etc.
```