Trained for 2 epochs on NilanE/ParallelFiction-Ja_En-100k using QLoRA. A CPO tune is in progress.

Input should be 500-1000 tokens long. Make sure to set `do_sample=False` if using HF transformers for inference, or otherwise set the temperature to 0, so that outputs are deterministic.

Prompt format:

Translate this from Japanese to English:
### JAPANESE:
{source_text}
### ENGLISH:
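A minimal inference sketch with HF transformers, using the prompt format above and greedy decoding (`do_sample=False`) as recommended; the helper names (`build_prompt`, `translate`) and `max_new_tokens` value are illustrative choices, not part of the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "NilanE/tinyllama-en_ja-translation-v3"

def build_prompt(source_text: str) -> str:
    """Wrap Japanese source text in the prompt format the model was trained on."""
    return (
        "Translate this from Japanese to English:\n"
        "### JAPANESE:\n"
        f"{source_text}\n"
        "### ENGLISH:\n"
    )

def translate(source_text: str, max_new_tokens: int = 512) -> str:
    """Deterministic (greedy) translation of a 500-1000 token Japanese passage."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(source_text), return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        do_sample=False,          # greedy decoding for deterministic output
        max_new_tokens=max_new_tokens,
    )
    # Keep only the newly generated tokens, dropping the echoed prompt.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Call `translate(japanese_text)` to get the English output as a string.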

Footnote:

This is an independently developed project. If anyone is interested in sponsoring further research, please contact [email protected]. Questions about model usage can be asked in the discussion tab.

Model size: 1.1B params · Tensor type: BF16 (Safetensors)
