Translation

CTranslate2 int8-quantized conversion of Seed-X-PPO-7B, a Mistral-based multilingual translation model.

Languages:

| Language | Abbr. | Language | Abbr. | Language | Abbr. | Language | Abbr. |
|----------|-------|----------|-------|----------|-------|----------|-------|
| Arabic | ar | French | fr | Malay | ms | Russian | ru |
| Czech | cs | Croatian | hr | Norwegian Bokmål | nb | Swedish | sv |
| Danish | da | Hungarian | hu | Dutch | nl | Thai | th |
| German | de | Indonesian | id | Norwegian | no | Turkish | tr |
| English | en | Italian | it | Polish | pl | Ukrainian | uk |
| Spanish | es | Japanese | ja | Portuguese | pt | Vietnamese | vi |
| Finnish | fi | Korean | ko | Romanian | ro | Chinese | zh |
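Prompts select the target language by appending the corresponding `<abbr>` tag after the sentence, as shown in the example code below. A minimal sketch of a prompt builder (the helper name `build_prompt` is ours, not part of the model's API) could look like:

```python
def build_prompt(sentence: str, target_lang: str, target_abbr: str) -> str:
    """Build a Seed-X style translation prompt: an English instruction
    followed by the sentence and the target-language tag (e.g. "<hu>")."""
    return (
        f"Translate the following English sentence into {target_lang}: "
        f"{sentence} <{target_abbr}>"
    )

print(build_prompt("Hello, world!", "Hungarian", "hu"))
# → Translate the following English sentence into Hungarian: Hello, world! <hu>
```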

Example code:

import ctranslate2
import transformers

# Load the converted CTranslate2 model and the matching tokenizer.
generator = ctranslate2.Generator("valamiasd/Seed-X-PPO-7B-ct2-int8", device="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained("valamiasd/Seed-X-PPO-7B-ct2-int8")

prompts = [
    "The way the start tokens are forwarded in the decoder depends on the argument.",
    "Batch of start tokens. If the decoder starts from a special start token like.",
    "Qwen Image Edit + ControlNet Openpose is possible?"
]

preprocessed_prompts = []
for p in prompts:
    # The prompt is an instruction plus the target-language tag (<hu> for Hungarian).
    preprocessed_prompts.append(f"Translate the following English sentence into Hungarian: {p} <hu>")

tokenized_prompts = []
for p in preprocessed_prompts:
    # CTranslate2 expects token strings, not token ids.
    tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(p))
    tokenized_prompts.append(tokens)

results = generator.generate_batch(tokenized_prompts, beam_size=4, include_prompt_in_result=False)

for i, result in enumerate(results):
    output = tokenizer.decode(result.sequences_ids[0])
    # Strip the BOS marker if the tokenizer included it in the decoded text.
    if output.startswith("<s>"):
        output = output[len("<s>"):]
    output = output.strip()
    print(f"Translation {i+1}: {output}")
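The per-result cleanup above (dropping a leading `<s>` marker and trimming whitespace) can be factored into a small, unit-testable helper; the function name `clean_output` is ours:

```python
def clean_output(decoded: str) -> str:
    """Remove a leading BOS marker ("<s>") and surrounding whitespace
    from text decoded by the tokenizer."""
    decoded = decoded.strip()
    if decoded.startswith("<s>"):
        decoded = decoded[len("<s>"):]
    return decoded.strip()

print(clean_output("<s> Szia, világ!"))  # → Szia, világ!
```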