ShittyTranslator 💩

ShittyTranslator is a fine-tuned SmolLM2-135M model that deliberately produces comically bad translations, emulating the "lost-in-translation" effect of running text through multiple consecutive machine translations.

I've always wanted something like this; it's what the world needs 🙏

Example Output

[Image: translation example]

Model Description

Base model: SmolLM2-135M

I let it translate its own description...

Original: ShittyTranslator is a language model fine-tuned on a custom dataset of intentionally poor translations. It simulates the humorous results you get when using machine translation services (like the big G Translate) to translate text from English through 4x random languages and then back to English.

Translated: Shetty has been trained by the Shatty Translators to translate English into other languages. Himu has implemented a method that can be used with the online tool for performing automated translation of texts in various linguistic contexts and return it as English-based content, like He's had written him about how he wrote.

Using with Ollama

Simply run ollama create name-of-your-model -f Modelfile

Modelfile defaults

FROM ./ShittyTranslator_f16.gguf

TEMPLATE """Original:{{ .Prompt }}
Translated:"""

PARAMETER num_predict 128
PARAMETER temperature 1.1
PARAMETER top_p 0.9
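To make the Modelfile defaults concrete, here is a minimal Python sketch of what the TEMPLATE and PARAMETER lines mean: Ollama wraps the user prompt in the Original:/Translated: format the fine-tune was trained on, and applies the sampling options at generation time. The helper name and example text below are my own illustration, not part of the model's code.

```python
# Sketch: how the Modelfile TEMPLATE shapes the prompt Ollama sends
# to the model. Helper name and example prompt are illustrative.

def apply_template(prompt: str) -> str:
    """Mimic the Modelfile TEMPLATE: wrap the user's text in the
    Original:/Translated: format the fine-tune expects."""
    return f"Original:{prompt}\nTranslated:"

# Generation options mirroring the Modelfile PARAMETER lines.
OPTIONS = {"num_predict": 128, "temperature": 1.1, "top_p": 0.9}

formatted = apply_template("The cake is a lie.")
print(formatted)
```

The model then continues the text after "Translated:", which is why clearing context between runs matters: leftover history would leak into the next "translation".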

Notes

  • ⚠️ It can output profanity and vulgar language, so viewer discretion is advised!
  • It's intended for English input and output; input in any other language may result in a complete loss of context.
  • Speaking of context, when running inference, make sure to clear the context each run to mimic "real" translations.
  • It works best if you translate sentence by sentence... I'm working on making it better for larger bodies of text.
  • It's not perfect, and might sometimes end sentences on weird characters, as well as add newlines and other weirdness.
  • The art of a perfect shitty translation is hard, and so is making a smol model do them for you.
  • If you make something funny like a video game translation (my fav), please let me know!
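Following the notes above (clear context every run, translate sentence by sentence), one way to drive the model is to split the input into sentences and send each one as an independent request to Ollama's /api/generate endpoint, with no shared context between calls. This is a hedged sketch: the sentence splitter and payload builder are my own simplifications, and the model name is whatever you chose in `ollama create`.

```python
import json
import re

# Default local Ollama endpoint; adjust if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/generate"

def split_sentences(text: str) -> list[str]:
    """Naive sentence splitter; good enough for short English text."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def build_requests(text: str, model: str = "shitty-translator") -> list[dict]:
    """One independent request per sentence. No `context` field is carried
    over, so each translation starts clean, as the notes recommend."""
    return [
        {"model": model, "prompt": sentence, "stream": False}
        for sentence in split_sentences(text)
    ]

payloads = build_requests(
    "All your base are belong to us. You have no chance to survive."
)
# Each payload would then be POSTed to OLLAMA_URL separately,
# e.g. with urllib.request or the `requests` library.
print(json.dumps(payloads, indent=2))
```

Because every request omits the `context` field entirely, the model sees each sentence fresh, mimicking pasting one line at a time into a translation box.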

Citation

Original SmolLM2 Citation

@misc{allal2025smollm2smolgoesbig,
      title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model}, 
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
      year={2025},
      eprint={2502.02737},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02737}, 
}