T-pro-it-2.0

🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.

Description

T-pro-it-2.0 is a model built upon the Qwen 3 model family and incorporates both continual pre-training and alignment techniques.

πŸ“š Dataset

Instruction Pre-Training: 40B tokens of instruction data, with one-third focused on reasoning tasks.

Supervised Fine-Tuning (SFT): ~500K high-quality and diverse instructions with balanced complexity. Reasoning tasks make up about 20% of the dataset.

Preference Tuning: ~100K carefully selected instructions, filtered by length and type for general tasks and with domain-balanced selection for reasoning tasks.

πŸ“Š Benchmarks

| Model | MERA | ruMMLU | Ru Arena Hard | ru AIME 2025 | ru LCB |
|---|---|---|---|---|---|
| T-pro 2.0 | 0.660 | 0.790 | 0.876 | 0.646 | 0.563 |
| Qwen 3 32B | 0.584 | 0.740 | 0.836 | 0.625 | 0.537 |
| Ruadapt 3 32B V2 | 0.574 | 0.737 | 0.660 | 0.450 | 0.500 |
| DeepSeek-R1-Distill-Qwen-32B | 0.508 | 0.702 | 0.426 | 0.402 | 0.493 |
| Gemma 3 27B | 0.577 | 0.695 | 0.759 | 0.231 | 0.261 |

Switching Between Thinking and Non‑Thinking Modes

To enable or disable reasoning mode in Hugging Face Transformers, set the enable_thinking flag in tokenizer.apply_chat_template.
For more details, see the Qwen3 model card.
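A minimal sketch of the toggle (the Qwen3-style chat template is assumed; full model loading is shown in the HF Usage section below):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t-tech/T-pro-it-2.0")
messages = [{"role": "user", "content": "ΠŸΡ€ΠΈΠ²Π΅Ρ‚!"}]

# Thinking mode: the template opens a <think> block for the model to reason in.
text_think = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)

# Non-thinking mode: the template suppresses the <think> block.
text_no_think = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)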


Recommended Generation Parameters

| Mode | Temperature | presence_penalty |
|---|---|---|
| No-think (general requests) | ≀ 0.3 | 1.0 |
| Think mode (standard requests) | β‰ˆ 0.6 | 1.0 |
| Complex reasoning requests | β‰₯ 0.8 | 1.0 |

- Hybrid reasoning models need careful tuning of sampling hyperparameters, which vary by domain.
- Use lower temperature for straightforward queries and higher temperature for complex think-mode tasks.
- A presence_penalty between 0 and 2 can help avoid repetitive outputs.
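For illustration, the table can be wrapped in a small helper; choose_sampling_params and the mode names are hypothetical, not part of any published API:

def choose_sampling_params(mode: str) -> dict:
    """Map a request type to the recommended sampling settings above (illustrative)."""
    presets = {
        "no_think": {"temperature": 0.3, "presence_penalty": 1.0},
        "think": {"temperature": 0.6, "presence_penalty": 1.0},
        "complex_reasoning": {"temperature": 0.8, "presence_penalty": 1.0},
    }
    return presets[mode]

# Usage with any OpenAI-compatible client:
# client.chat.completions.create(..., **choose_sampling_params("think"))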

πŸ‘¨β€πŸ’» Examples of usage

SGLang Usage

For better quality and stable performance, we recommend SGLang as your inference framework.

To run an inference server for T-pro-it-2.0, start by launching the SGLang server:

python -m sglang.launch_server \
    --model-path t-tech/T-pro-it-2.0 \
    --reasoning-parser qwen3

Once the server is up and listening on localhost:30000, you can send chat-based requests via the OpenAI Python client.

import openai

client = openai.OpenAI(
    base_url="http://127.0.0.1:30000/v1",
    api_key="ANY"  # the server ignores the API key
)

prompt = (
    "ΠŸΠΎΠΆΠ°Π»ΡƒΠΉΡΡ‚Π°, вычисли ΠΎΠΏΡ€Π΅Π΄Π΅Π»Ρ‘Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» ∫_0^1 xΒ²β€―eΛ£β€―dx, "
    "пошагово объясни Ρ€Π΅ΡˆΠ΅Π½ΠΈΠ΅ ΠΈ ΡƒΠΊΠ°ΠΆΠΈ ΠΎΠΊΠΎΠ½Ρ‡Π°Ρ‚Π΅Π»ΡŒΠ½Ρ‹ΠΉ Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚."
)

completion = client.chat.completions.create(
    model="ANY",  # the server ignores the model name
    messages=[
        {"role": "system", "content": "Π’Ρ‹ T-pro, Π²ΠΈΡ€Ρ‚ΡƒΠ°Π»ΡŒΠ½Ρ‹ΠΉ ассистСнт Π² Π’-Π’Π΅Ρ…Π½ΠΎΠ»ΠΎΠ³ΠΈΠΈ. Ввоя Π·Π°Π΄Π°Ρ‡Π° - Π±Ρ‹Ρ‚ΡŒ ΠΏΠΎΠ»Π΅Π·Π½Ρ‹ΠΌ Π΄ΠΈΠ°Π»ΠΎΠ³ΠΎΠ²Ρ‹ΠΌ ассистСнтом."},
        {"role": "user", "content": prompt}
    ],
    # REQUIRED: sampling params from the "Recommended Generation Parameters" table
    temperature=0.6,
    presence_penalty=1.0,
)

# The generated reply is in `completion.choices[0].message.content`
print(completion.choices[0].message.content)

Note: always pass both temperature and presence_penalty in each completion call, using the values from the Recommended Generation Parameters table.
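Because the server was launched with --reasoning-parser qwen3, the thinking trace is separated from the final reply. A sketch of reading it; the reasoning_content field is SGLang's OpenAI-compatible extension, and its availability is an assumption that may vary by SGLang version:

message = completion.choices[0].message

# With --reasoning-parser qwen3, SGLang strips the <think> block from
# `content` and exposes it separately; treat the field name as an assumption.
reasoning = getattr(message, "reasoning_content", None)
if reasoning:
    print("--- reasoning trace ---")
    print(reasoning)
print("--- final answer ---")
print(message.content)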

HF Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
torch.manual_seed(42)

model_name = "t-tech/T-pro-it-2.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, 
    torch_dtype="auto",
    device_map="auto",
)

prompt = (
    "ΠŸΠΎΠΆΠ°Π»ΡƒΠΉΡΡ‚Π°, вычисли ΠΎΠΏΡ€Π΅Π΄Π΅Π»Ρ‘Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» ∫_0^1 xΒ²β€―eΛ£β€―dx, "
    "пошагово объясни Ρ€Π΅ΡˆΠ΅Π½ΠΈΠ΅ ΠΈ ΡƒΠΊΠ°ΠΆΠΈ ΠΎΠΊΠΎΠ½Ρ‡Π°Ρ‚Π΅Π»ΡŒΠ½Ρ‹ΠΉ Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚."
)
messages = [
    {"role": "system", "content": "Π’Ρ‹ T-pro, Π²ΠΈΡ€Ρ‚ΡƒΠ°Π»ΡŒΠ½Ρ‹ΠΉ ассистСнт Π² Π’-Π’Π΅Ρ…Π½ΠΎΠ»ΠΎΠ³ΠΈΠΈ. Ввоя Π·Π°Π΄Π°Ρ‡Π° - Π±Ρ‹Ρ‚ΡŒ ΠΏΠΎΠ»Π΅Π·Π½Ρ‹ΠΌ Π΄ΠΈΠ°Π»ΠΎΠ³ΠΎΠ²Ρ‹ΠΌ ассистСнтом."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)

Output:

<think>
Π₯ΠΎΡ€ΠΎΡˆΠΎ, ΠΌΠ½Π΅ Π½ΡƒΠΆΠ½ΠΎ Π²Ρ‹Ρ‡ΠΈΡΠ»ΠΈΡ‚ΡŒ ΠΎΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» ΠΎΡ‚ 0 Π΄ΠΎ 1 Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ xΒ² * e^x dx. Π― помню, Ρ‡Ρ‚ΠΎ для ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π»ΠΎΠ² Ρ‚Π°ΠΊΠΎΠ³ΠΎ Π²ΠΈΠ΄Π°, Π³Π΄Π΅ Π΅ΡΡ‚ΡŒ ΠΏΡ€ΠΎΠΈΠ·Π²Π΅Π΄Π΅Π½ΠΈΠ΅ ΠΌΠ½ΠΎΠ³ΠΎΡ‡Π»Π΅Π½Π° ΠΈ экспонСнты, ΠΎΠ±Ρ‹Ρ‡Π½ΠΎ ΠΏΡ€ΠΈΠΌΠ΅Π½ΡΡŽΡ‚ ΠΌΠ΅Ρ‚ΠΎΠ΄ интСгрирования ΠΏΠΎ частям. Π”Π°Π²Π°ΠΉΡ‚Π΅ вспомню Ρ„ΠΎΡ€ΠΌΡƒΠ»Ρƒ интСгрирования ΠΏΠΎ частям: ∫u dv = uv - ∫v du. 

ΠŸΠ΅Ρ€Π²Ρ‹ΠΌ Π΄Π΅Π»ΠΎΠΌ Π½ΡƒΠΆΠ½ΠΎ Π²Ρ‹Π±Ρ€Π°Ρ‚ΡŒ, Ρ‡Ρ‚ΠΎ Π²Π·ΡΡ‚ΡŒ Π·Π° u, Π° Ρ‡Ρ‚ΠΎ Π·Π° dv. ΠžΠ±Ρ‹Ρ‡Π½ΠΎ Π² Ρ‚Π°ΠΊΠΈΡ… случаях ΠΌΠ½ΠΎΠ³ΠΎΡ‡Π»Π΅Π½ (Π² Π΄Π°Π½Π½ΠΎΠΌ случаС xΒ²) Π±Π΅Ρ€ΡƒΡ‚ Π·Π° u, ΠΏΠΎΡ‚ΠΎΠΌΡƒ Ρ‡Ρ‚ΠΎ ΠΏΡ€ΠΈ Π΄ΠΈΡ„Ρ„Π΅Ρ€Π΅Π½Ρ†ΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠΈ Π΅Π³ΠΎ ΡΡ‚Π΅ΠΏΠ΅Π½ΡŒ пониТаСтся, ΠΈ Π² ΠΊΠΎΠ½Ρ†Π΅ ΠΊΠΎΠ½Ρ†ΠΎΠ² ΠΎΠ½ прСвратится Π² константу, Ρ‡Ρ‚ΠΎ упростит ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π». Π’ΠΎΠ³Π΄Π° dv Π±ΡƒΠ΄Π΅Ρ‚ ΠΎΡΡ‚Π°Π²ΡˆΠ°ΡΡΡ Ρ‡Π°ΡΡ‚ΡŒ, Ρ‚ΠΎ Π΅ΡΡ‚ΡŒ e^x dx. 

Π˜Ρ‚Π°ΠΊ, ΠΏΡƒΡΡ‚ΡŒ u = xΒ², Ρ‚ΠΎΠ³Π΄Π° du = 2x dx. А dv = e^x dx, Π·Π½Π°Ρ‡ΠΈΡ‚, v = ∫e^x dx = e^x. 

Π’Π΅ΠΏΠ΅Ρ€ΡŒ ΠΏΡ€ΠΈΠΌΠ΅Π½ΠΈΠΌ Ρ„ΠΎΡ€ΠΌΡƒΠ»Ρƒ интСгрирования ΠΏΠΎ частям: ∫xΒ² e^x dx = uv - ∫v du = xΒ² e^x - ∫e^x * 2x dx. 

Π’Π΅ΠΏΠ΅Ρ€ΡŒ Ρƒ нас остался ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» ∫2x e^x dx. Π•Π³ΠΎ Ρ‚ΠΎΠΆΠ΅ Π½ΡƒΠΆΠ½ΠΎ Π²Π·ΡΡ‚ΡŒ ΠΏΠΎ частям. ΠžΠΏΡΡ‚ΡŒ Π²Ρ‹Π±ΠΈΡ€Π°Π΅ΠΌ u ΠΈ dv. Π’Π΅ΠΏΠ΅Ρ€ΡŒ u = 2x, Ρ‚ΠΎΠ³Π΄Π° du = 2 dx, Π° dv = e^x dx, Π·Π½Π°Ρ‡ΠΈΡ‚, v = e^x. 

ΠŸΡ€ΠΈΠΌΠ΅Π½ΡΠ΅ΠΌ Ρ„ΠΎΡ€ΠΌΡƒΠ»Ρƒ снова: ∫2x e^x dx = 2x e^x - ∫e^x * 2 dx = 2x e^x - 2∫e^x dx. 

Π˜Π½Ρ‚Π΅Π³Ρ€Π°Π» ∫e^x dx Ρ€Π°Π²Π΅Π½ e^x + C. ΠŸΠΎΠ΄ΡΡ‚Π°Π²Π»ΡΠ΅ΠΌ ΠΎΠ±Ρ€Π°Ρ‚Π½ΠΎ: 2x e^x - 2e^x + C. 

Π’Π΅ΠΏΠ΅Ρ€ΡŒ возвращаСмся ΠΊ исходному ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π»Ρƒ: xΒ² e^x - (2x e^x - 2e^x) + C = xΒ² e^x - 2x e^x + 2e^x + C. 

Π’Π΅ΠΏΠ΅Ρ€ΡŒ Π½ΡƒΠΆΠ½ΠΎ Π²Ρ‹Ρ‡ΠΈΡΠ»ΠΈΡ‚ΡŒ ΠΎΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» ΠΎΡ‚ 0 Π΄ΠΎ 1. Для этого подставим Π²Π΅Ρ€Ρ…Π½ΠΈΠΉ ΠΈ Π½ΠΈΠΆΠ½ΠΈΠΉ ΠΏΡ€Π΅Π΄Π΅Π»Ρ‹ Π² ΠΏΠ΅Ρ€Π²ΠΎΠΎΠ±Ρ€Π°Π·Π½ΡƒΡŽ ΠΈ Π²Ρ‹Ρ‡Ρ‚Π΅ΠΌ. 

Π‘Π½Π°Ρ‡Π°Π»Π° подставим x = 1: (1Β² * e^1) - 2*1*e^1 + 2*e^1 = e - 2e + 2e = (1 - 2 + 2)e = 1*e = e. 

Π’Π΅ΠΏΠ΅Ρ€ΡŒ подставим x = 0: (0Β² * e^0) - 2*0*e^0 + 2*e^0 = 0 - 0 + 2*1 = 2. 

Π’Ρ‹Ρ‡ΠΈΡ‚Π°Π΅ΠΌ: e - 2. 

ΠŸΡ€ΠΎΠ²Π΅Ρ€ΡŽ, Π½Π΅ ошибся Π»ΠΈ я Π² вычислСниях. ΠŸΡ€ΠΈ подстановкС x=1: 1*e - 2*1*e + 2*e = e - 2e + 2e = e. Π”Π°, Π²Π΅Ρ€Π½ΠΎ. ΠŸΡ€ΠΈ x=0: 0 - 0 + 2*1 = 2. Π’ΠΎΠ³Π΄Π° Ρ€Π°Π·Π½ΠΎΡΡ‚ΡŒ e - 2. 

Но ΠΏΠΎΠ΄ΠΎΠΆΠ΄ΠΈ, ΠΌΠΎΠΆΠ΅Ρ‚ Π±Ρ‹Ρ‚ΡŒ я Π³Π΄Π΅-Ρ‚ΠΎ пропустил Π·Π½Π°ΠΊ? Π’ ΠΏΠ΅Ρ€Π²ΠΎΠΎΠ±Ρ€Π°Π·Π½ΠΎΠΉ послС интСгрирования ΠΏΠΎ частям: xΒ² e^x - 2x e^x + 2e^x. Π”Π°, ΠΏΡ€Π°Π²ΠΈΠ»ΡŒΠ½ΠΎ. Π’ΠΎΠ³Π΄Π° ΠΏΡ€ΠΈ подстановкС 1: 1*e - 2*1*e + 2*e = (1 - 2 + 2)e = 1*e. ΠŸΡ€ΠΈ 0: 0 - 0 + 2*1 = 2. Π’ΠΎΠ³Π΄Π° ΠΎΡ‚Π²Π΅Ρ‚ e - 2. 

Но e ΠΏΡ€ΠΈΠΌΠ΅Ρ€Π½ΠΎ Ρ€Π°Π²Π½ΠΎ 2.718, Π·Π½Π°Ρ‡ΠΈΡ‚, e - 2 β‰ˆ 0.718. Π˜Π½Ρ‚ΡƒΠΈΡ‚ΠΈΠ²Π½ΠΎ каТСтся, Ρ‡Ρ‚ΠΎ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» ΠΎΡ‚ 0 Π΄ΠΎ 1 xΒ² e^x Π΄ΠΎΠ»ΠΆΠ΅Π½ Π±Ρ‹Ρ‚ΡŒ ΠΏΠΎΠ»ΠΎΠΆΠΈΡ‚Π΅Π»ΡŒΠ½Ρ‹ΠΌ, Ρ‡Ρ‚ΠΎ соотвСтствуСт. 

ΠœΠΎΠΆΠ΅Ρ‚, стоит ΠΏΡ€ΠΎΠ²Π΅Ρ€ΠΈΡ‚ΡŒ Ρ‡Π΅Ρ€Π΅Π· Π΄Ρ€ΡƒΠ³ΠΎΠΉ ΠΌΠ΅Ρ‚ΠΎΠ΄ ΠΈΠ»ΠΈ Π΄ΠΈΡ„Ρ„Π΅Ρ€Π΅Π½Ρ†ΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠ΅ΠΌ ΠΏΠ΅Ρ€Π²ΠΎΠΎΠ±Ρ€Π°Π·Π½ΠΎΠΉ? Π’ΠΎΠ·ΡŒΠΌΠ΅ΠΌ ΠΏΡ€ΠΎΠΈΠ·Π²ΠΎΠ΄Π½ΡƒΡŽ ΠΎΡ‚ xΒ² e^x - 2x e^x + 2e^x. 

ΠŸΡ€ΠΎΠΈΠ·Π²ΠΎΠ΄Π½Π°Ρ: d/dx [xΒ² e^x] = 2x e^x + xΒ² e^x. 

d/dx [-2x e^x] = -2 e^x - 2x e^x. 

d/dx [2e^x] = 2 e^x. 

Π‘ΠΊΠ»Π°Π΄Ρ‹Π²Π°Π΅ΠΌ всС: 2x e^x + xΒ² e^x - 2 e^x - 2x e^x + 2 e^x = xΒ² e^x. 

Π”Π°, производная Ρ€Π°Π²Π½Π° ΠΏΠΎΠ΄Ρ‹Π½Ρ‚Π΅Π³Ρ€Π°Π»ΡŒΠ½ΠΎΠΉ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ. Π—Π½Π°Ρ‡ΠΈΡ‚, пСрвообразная Π½Π°ΠΉΠ΄Π΅Π½Π° Π²Π΅Ρ€Π½ΠΎ. Π’ΠΎΠ³Π΄Π° ΠΎΠΏΡ€Π΅Π΄Π΅Π»Π΅Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» Ρ€Π°Π²Π΅Π½ e - 2. 

Π’Ρ€ΠΎΠ΄Π΅ всС ΠΏΡ€Π°Π²ΠΈΠ»ΡŒΠ½ΠΎ. Π’ΠΎΠ·ΠΌΠΎΠΆΠ½ΠΎ, ΠΌΠΎΠΆΠ½ΠΎ Π±Ρ‹Π»ΠΎ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ Ρ€Π΅ΠΊΡƒΡ€Ρ€Π΅Π½Ρ‚Π½ΡƒΡŽ Ρ„ΠΎΡ€ΠΌΡƒΠ»Ρƒ ΠΈΠ»ΠΈ Ρ‚Π°Π±Π»ΠΈΡ†Ρƒ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π»ΠΎΠ², Π½ΠΎ ΠΌΠ΅Ρ‚ΠΎΠ΄ ΠΏΠΎ частям сработал. 

Π˜Ρ‚Π°ΠΊ, ΠΎΠΊΠΎΠ½Ρ‡Π°Ρ‚Π΅Π»ΡŒΠ½Ρ‹ΠΉ ΠΎΡ‚Π²Π΅Ρ‚: e - 2.
</think>

Для вычислСния ΠΎΠΏΡ€Π΅Π΄Π΅Π»Ρ‘Π½Π½ΠΎΠ³ΠΎ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π»Π°  
$$
\int_0^1 x^2 e^x \, dx
$$
ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅ΠΌ **ΠΌΠ΅Ρ‚ΠΎΠ΄ интСгрирования ΠΏΠΎ частям**.  

### Π¨Π°Π³ 1: ΠŸΡ€ΠΈΠΌΠ΅Π½Π΅Π½ΠΈΠ΅ интСгрирования ΠΏΠΎ частям  
Π€ΠΎΡ€ΠΌΡƒΠ»Π° интСгрирования ΠΏΠΎ частям:  
$$
\int u \, dv = uv - \int v \, du.
$$
Π’Ρ‹Π±ΠΈΡ€Π°Π΅ΠΌ:  
- $ u = x^2 $, Ρ‚ΠΎΠ³Π΄Π° $ du = 2x \, dx $,  
- $ dv = e^x \, dx $, Ρ‚ΠΎΠ³Π΄Π° $ v = e^x $.  

ΠŸΠΎΠ΄ΡΡ‚Π°Π²Π»ΡΠ΅ΠΌ Π² Ρ„ΠΎΡ€ΠΌΡƒΠ»Ρƒ:  
$$
\int x^2 e^x \, dx = x^2 e^x - \int e^x \cdot 2x \, dx = x^2 e^x - 2 \int x e^x \, dx.
$$

### Π¨Π°Π³ 2: Π˜Π½Ρ‚Π΅Π³Ρ€ΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠ΅ ΠΎΡΡ‚Π°Π²ΡˆΠ΅Π³ΠΎΡΡ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π»Π°  
Π’Π΅ΠΏΠ΅Ρ€ΡŒ вычисляСм $ \int x e^x \, dx $, снова примСняя ΠΈΠ½Ρ‚Π΅Π³Ρ€ΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠ΅ ΠΏΠΎ частям:  
- $ u = x $, Ρ‚ΠΎΠ³Π΄Π° $ du = dx $,  
- $ dv = e^x \, dx $, Ρ‚ΠΎΠ³Π΄Π° $ v = e^x $.  

ΠŸΠΎΠ»ΡƒΡ‡Π°Π΅ΠΌ:  
$$
\int x e^x \, dx = x e^x - \int e^x \, dx = x e^x - e^x + C.
$$

ΠŸΠΎΠ΄ΡΡ‚Π°Π²Π»ΡΠ΅ΠΌ ΠΎΠ±Ρ€Π°Ρ‚Π½ΠΎ:  
$$
\int x^2 e^x \, dx = x^2 e^x - 2 \left( x e^x - e^x \right) + C = x^2 e^x - 2x e^x + 2 e^x + C.
$$

### Π¨Π°Π³ 3: ВычислСниС ΠΎΠΏΡ€Π΅Π΄Π΅Π»Ρ‘Π½Π½ΠΎΠ³ΠΎ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π»Π°  
ΠŸΠΎΠ΄ΡΡ‚Π°Π²Π»ΡΠ΅ΠΌ ΠΏΡ€Π΅Π΄Π΅Π»Ρ‹ $ 0 $ ΠΈ $ 1 $:  
$$
\left[ x^2 e^x - 2x e^x + 2 e^x \right]_0^1 = \left( 1^2 e^1 - 2 \cdot 1 \cdot e^1 + 2 e^1 \right) - \left( 0^2 e^0 - 2 \cdot 0 \cdot e^0 + 2 e^0 \right).
$$

Π£ΠΏΡ€ΠΎΡ‰Π°Π΅ΠΌ:  
- ΠŸΡ€ΠΈ $ x = 1 $:  
  $$
  e - 2e + 2e = e.
  $$  
- ΠŸΡ€ΠΈ $ x = 0 $:  
  $$
  0 - 0 + 2 \cdot 1 = 2.
  $$  

Π˜Ρ‚ΠΎΠ³ΠΎΠ²Ρ‹ΠΉ Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚:  
$$
e - 2.
$$

### ΠžΡ‚Π²Π΅Ρ‚:  
$$
\boxed{e - 2}
$$
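To separate the thinking trace from the final answer programmatically, you can split the generated ids at the closing </think> token, following the recipe in the Qwen3 model card. A minimal sketch continuing the HF Usage example above; the token id 151668 is Qwen3's </think> id and should be verified for this tokenizer:

# Continues the HF Usage example (generated_ids and tokenizer in scope).
output_ids = generated_ids[0].tolist()

try:
    # Position just after the last </think> token (id 151668 in Qwen3;
    # verify via tokenizer.convert_tokens_to_ids("</think>")).
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0  # no thinking block, e.g. enable_thinking=False

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)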

vLLM Usage

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "t-tech/T-pro-it-2.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, max_model_len=8192)
sampling_params = SamplingParams(
    temperature=0.7,
    repetition_penalty=1.05,
    top_p=0.8,
    top_k=70,
    max_tokens=512,
)

prompt = (
    "ΠŸΠΎΠΆΠ°Π»ΡƒΠΉΡΡ‚Π°, вычисли ΠΎΠΏΡ€Π΅Π΄Π΅Π»Ρ‘Π½Π½Ρ‹ΠΉ ΠΈΠ½Ρ‚Π΅Π³Ρ€Π°Π» ∫_0^1 xΒ²β€―eΛ£β€―dx, "
    "пошагово объясни Ρ€Π΅ΡˆΠ΅Π½ΠΈΠ΅ ΠΈ ΡƒΠΊΠ°ΠΆΠΈ ΠΎΠΊΠΎΠ½Ρ‡Π°Ρ‚Π΅Π»ΡŒΠ½Ρ‹ΠΉ Ρ€Π΅Π·ΡƒΠ»ΡŒΡ‚Π°Ρ‚."
)
messages = [
    {"role": "system", "content": "Π’Ρ‹ T-pro, Π²ΠΈΡ€Ρ‚ΡƒΠ°Π»ΡŒΠ½Ρ‹ΠΉ ассистСнт Π² Π’-Π’Π΅Ρ…Π½ΠΎΠ»ΠΎΠ³ΠΈΠΈ. Ввоя Π·Π°Π΄Π°Ρ‡Π° - Π±Ρ‹Ρ‚ΡŒ ΠΏΠΎΠ»Π΅Π·Π½Ρ‹ΠΌ Π΄ΠΈΠ°Π»ΠΎΠ³ΠΎΠ²Ρ‹ΠΌ ассистСнтом."},
    {"role": "user", "content": prompt}
]

prompt_token_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)

generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
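To run the same request in non-thinking mode, pass enable_thinking=False through the chat template and use the lower no-think temperature from the recommendations table; a sketch reusing the objects defined above:

# Non-thinking mode: suppress the <think> block and sample conservatively.
no_think_params = SamplingParams(temperature=0.3, presence_penalty=1.0, max_tokens=512)
no_think_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3-style chat template flag
)
outputs = llm.generate(prompt_token_ids=no_think_ids, sampling_params=no_think_params)
print(outputs[0].outputs[0].text)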

Long Context Usage

T-pro-it-2.0 natively supports a context length of 32,768 tokens.
For conversations where the input significantly exceeds this limit, follow the recommendations from the Qwen3 model card on processing long texts.

For example, with llama.cpp you can enable 128K context support by launching llama-server with YaRN rope scaling:
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
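For SGLang, the Qwen3 model card suggests passing the equivalent YaRN settings as a JSON model override; a sketch to adapt (verify the flags against your SGLang version):

python -m sglang.launch_server \
    --model-path t-tech/T-pro-it-2.0 \
    --reasoning-parser qwen3 \
    --context-length 131072 \
    --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'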
