nielsr (HF Staff) committed
Commit efa5756 · verified · 1 Parent(s): b317545

Add link to Github repository


This PR improves the model card by adding a direct link to the Github repository.

Files changed (1):
  1. README.md +17 -9
README.md CHANGED
@@ -1,18 +1,18 @@
 ---
 base_model: LGAI-EXAONE/EXAONE-Deep-7.8B
-base_model_relation: quantized
-license: other
-license_name: exaone
-license_link: LICENSE
 language:
 - en
 - ko
+library_name: transformers
+license: other
+license_name: exaone
+license_link: LICENSE
+pipeline_tag: text-generation
 tags:
 - lg-ai
 - exaone
 - exaone-deep
+base_model_relation: quantized
 ---

 <p align="center">
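
The hunk above only reorders the model-card frontmatter keys; no values change. As a sanity check, the metadata can be re-parsed with `huggingface_hub`. A minimal sketch, not part of the PR, using a hypothetical repo id (the actual repository is not named in this diff):

```python
# Minimal sketch (not part of the PR): verify the model-card frontmatter
# still parses after the key reshuffle. The repo id is a hypothetical
# placeholder -- substitute the repository this card actually belongs to.
from huggingface_hub import ModelCard

card = ModelCard.load("LGAI-EXAONE/EXAONE-Deep-7.8B-GGUF")  # hypothetical repo id
meta = card.data.to_dict()

# Standard keys are recognized metadata fields; base_model_relation is a
# custom key and survives as a plain dict entry.
print(meta["license"], meta["pipeline_tag"], meta["library_name"])
print(meta.get("base_model_relation"))  # expected: "quantized"
```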
@@ -62,7 +62,12 @@ llama-cli -m ./EXAONE-Deep-7.8B-BF16.gguf \
     --temp 0.6 \
     --top-p 0.95 \
     --jinja \
-    --chat-template "{% for message in messages %}{% if loop.first and message['role'] != 'system' %}{{ '[|system|][|endofturn|]\n' }}{% endif %}{% set content = message['content'] %}{% if '</thought>' in content %}{% set content = content.split('</thought>')[-1].lstrip('\\n') %}{% endif %}{{ '[|' + message['role'] + '|]' + content }}{% if not message['role'] == 'user' %}{{ '[|endofturn|]' }}{% endif %}{% if not loop.last %}{{ '\n' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '\n[|assistant|]<thought>\n' }}{% endif %}"
+    --chat-template "{% for message in messages %}{% if loop.first and message['role'] != 'system' %}{{ '[|system|][|endofturn|]
+' }}{% endif %}{% set content = message['content'] %}{% if '</thought>' in content %}{% set content = content.split('</thought>')[-1].lstrip('\
+') %}{% endif %}{{ '[|' + message['role'] + '|]' + content }}{% if not message['role'] == 'user' %}{{ '[|endofturn|]' }}{% endif %}{% if not loop.last %}{{ '
+' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '
+[|assistant|]<thought>
+' }}{% endif %}"
 ```
 
 > ### Note
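
The only substantive change in this hunk is that the escaped `\n` sequences inside the `--chat-template` argument become literal newlines, which a POSIX shell preserves inside double quotes. To see what prompt the template actually produces, it can be rendered directly with `jinja2`. A minimal sketch, not part of the PR, with invented example messages:

```python
# Minimal sketch (not part of the PR): render the EXAONE chat template with
# jinja2 to inspect the prompt string llama-cli would build. The example
# messages below are invented for illustration.
from jinja2 import Template

CHAT_TEMPLATE = (
    "{% for message in messages %}"
    "{% if loop.first and message['role'] != 'system' %}{{ '[|system|][|endofturn|]\n' }}{% endif %}"
    "{% set content = message['content'] %}"
    "{% if '</thought>' in content %}{% set content = content.split('</thought>')[-1].lstrip('\n') %}{% endif %}"
    "{{ '[|' + message['role'] + '|]' + content }}"
    "{% if not message['role'] == 'user' %}{{ '[|endofturn|]' }}{% endif %}"
    "{% if not loop.last %}{{ '\n' }}{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '\n[|assistant|]<thought>\n' }}{% endif %}"
)

messages = [
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "<thought>\n2 + 2 = 4.\n</thought>The answer is 4."},
    {"role": "user", "content": "And 3+3?"},
]

prompt = Template(CHAT_TEMPLATE).render(messages=messages, add_generation_prompt=True)
print(prompt)
# The assistant turn is rendered WITHOUT its <thought>...</thought> block,
# and the prompt ends with "[|assistant|]<thought>\n" so generation resumes
# inside a fresh reasoning block.
```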
@@ -93,8 +98,11 @@ We provide the pre-quantized EXAONE Deep models with **AWQ** and several quantiz
 
 To achieve the expected performance, we recommend using the following configurations:
 
-1. Ensure the model starts with `<thought>\n` for reasoning steps. The model's output quality may degrade if you omit it. You can easily apply this by using `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code in the [Quickstart](#quickstart) section.
-2. The reasoning steps of EXAONE Deep models, enclosed by `<thought>\n...\n</thought>`, usually contain many tokens, so earlier reasoning steps may need to be removed in multi-turn conversations. The provided tokenizer handles this automatically.
+1. Ensure the model starts with `<thought>
+` for reasoning steps. The model's output quality may degrade if you omit it. You can easily apply this by using `tokenizer.apply_chat_template()` with `add_generation_prompt=True`. Please check the example code in the [Quickstart](#quickstart) section.
+2. The reasoning steps of EXAONE Deep models, enclosed by `<thought>
+...
+</thought>`, usually contain many tokens, so earlier reasoning steps may need to be removed in multi-turn conversations. The provided tokenizer handles this automatically.
 3. Avoid using a system prompt; instead, build the instruction into the user prompt.
 4. Additional instructions help the models reason more deeply, so that they generate better output.
    - For math problems, the instructions **"Please reason step by step, and put your final answer within \boxed{}."** are helpful.
 
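Recommendations 1 and 2 in the hunk above are what `tokenizer.apply_chat_template()` automates: it appends the `[|assistant|]<thought>` generation prefix and drops the `<thought>...</thought>` block from earlier assistant turns. A minimal sketch, not part of the PR, assuming `transformers` is installed and using the original `LGAI-EXAONE/EXAONE-Deep-7.8B` repo, with invented messages:

```python
# Minimal sketch (not part of the PR): show recommendations 1 and 2 in
# practice with the original EXAONE repo's tokenizer. Messages are invented.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LGAI-EXAONE/EXAONE-Deep-7.8B")

messages = [
    {"role": "user", "content": "Please reason step by step, and put your final answer within \\boxed{}. What is 12*13?"},
    {"role": "assistant", "content": "<thought>\n12*13 = 156\n</thought>The answer is \\boxed{156}."},
    {"role": "user", "content": "Now 14*15?"},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the prompt string for inspection
    add_generation_prompt=True,  # ensures the model starts inside <thought>
)
# Per recommendation 2, the earlier <thought>...</thought> block is stripped;
# per recommendation 1, the prompt ends with "[|assistant|]<thought>\n".
print(prompt)
```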