
Open-Assistant Llama2 70B SFT OASST

This model is a fine-tuned version of the Llama2 70B LLM. It was trained on a mixture of OASST (Open-Assistant) top-1 conversation threads.

Model Details

  • Finetuned from: Llama2 70B
  • Model type: Causal decoder-only transformer language model
  • Languages: English, German, Spanish, and French (with limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish)
  • License: Apache 2.0
  • Contact: Open-Assistant Discord

Prompting

Two special tokens are used to mark the beginning of user and assistant turns: <|prompter|> and <|assistant|>. Each turn ends with a </s> token.
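
For multi-turn conversations, the turns are simply concatenated in this format. Below is a minimal sketch of a hypothetical helper that builds such a prompt; the function name and the (role, text) message structure are illustrative and not part of the model card.

# Hypothetical helper: concatenate (role, text) turns into the OASST-style prompt.
def build_prompt(turns):
    """turns: list of (role, text) pairs, where role is "prompter" or "assistant"."""
    prompt = ""
    for role, text in turns:
        # Each turn is wrapped in its special token and closed with </s>.
        prompt += f"<|{role}|>{text}</s>"
    # End with <|assistant|> so the model generates the next assistant reply.
    return prompt + "<|assistant|>"

print(build_prompt([("prompter", "What is a meme, and what's the history behind this word?")]))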

Input prompt example:

<|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|>

The input ends with the <|assistant|> token to signal that the model should start generating the assistant reply.
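
A minimal sketch of running generation with this prompt format via the Hugging Face transformers library is shown below; the generation settings and device placement are illustrative assumptions, not taken from the model card.

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jordiclive/Llama-2-70b-oasst-1-200"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: device_map="auto" requires the accelerate package and enough GPU
# memory for a 70B model (or an offloading / quantization setup).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated assistant tokens.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)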

Citation

@misc{jordiclive_llama2_70b_oasst_1_200,
  title={{Open-Assistant Llama2 70B SFT OASST}},
  author={Jordan Clive},
  howpublished={\url{https://huggingface.co/jordiclive/Llama-2-70b-oasst-1-200}},
  year={2023},
  note={Apache 2.0 License. Finetuned on OASST top-1 threads. Languages supported: English, German, Spanish, French.},
  url={https://huggingface.co/jordiclive/Llama-2-70b-oasst-1-200},
}