AnatoliiPotapov and merve (HF Staff) committed
Commit a826dfc · verified · 1 Parent(s): cb2eb8c

Improve metadata 🤗 (#3)


- Improve metadata 🤗 (4f7e80456b886c76754df594144ec7caf9d0f199)


Co-authored-by: merve <[email protected]>

Files changed (1)
  1. README.md (+3 -1)
README.md CHANGED
@@ -4,6 +4,8 @@ language:
 license: apache-2.0
 base_model:
 - Qwen/Qwen3-32B
+pipeline_tag: text-generation
+library_name: transformers
 ---
 # T-pro-it-2.0
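The two added fields drive Hub behavior: `pipeline_tag: text-generation` files the repo under the text-generation task, and `library_name: transformers` selects which library's loading snippet the model page offers. A minimal sketch of what the merged metadata enables, assuming the repo id `t-tech/T-pro-it-2.0` (taken from the card's title, not stated in this diff):

```python
# Minimal sketch, assuming the repo id t-tech/T-pro-it-2.0.
# With pipeline_tag published in the card metadata, the task string below
# matches what the Hub now advertises for this repo.
from transformers import pipeline

generator = pipeline("text-generation", model="t-tech/T-pro-it-2.0")
print(generator("Привет! Чем ты можешь помочь?", max_new_tokens=64)[0]["generated_text"])
```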
 
@@ -297,4 +299,4 @@ T-pro-it-2.0 natively supports a context length of 32,768 tokens.
 For conversations where the input significantly exceeds this limit, follow the recommendations from the [Qwen3 model card](https://huggingface.co/Qwen/Qwen3-235B-A22B#processing-long-texts) on processing long texts.
 
 For example, in SGLang, you can enable 128K context support with the following command:
-`llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768`
+`llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768`
 
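The flags in that line are llama.cpp `llama-server` options. As a hedged sketch, the same YaRN extension can be expressed as a `rope_scaling` override when loading with transformers, following the scheme the Qwen3 model card documents; the repo id and exact values mirror the command line above and are assumptions, not part of this commit:

```python
# Sketch of an equivalent YaRN override in transformers, assuming the
# rope_scaling scheme documented for Qwen3-family models. A factor of 4
# over the native 32,768-token context yields roughly 128K tokens.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("t-tech/T-pro-it-2.0")
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,                               # mirrors --rope-scale 4
    "original_max_position_embeddings": 32768,   # mirrors --yarn-orig-ctx 32768
}
model = AutoModelForCausalLM.from_pretrained(
    "t-tech/T-pro-it-2.0", config=config, torch_dtype="auto"
)
```

As the Qwen3 card notes for static YaRN, the scaling factor applies regardless of input length, so it is best enabled only when long contexts are actually needed.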