Maxime Labonne, August 7, 2024

Orca is a 13B-parameter LLM with ChatGPT-level performance thanks to a huge dataset of 5M samples with step-by-step explanations.
📝 Paper: https://arxiv.org/abs/2306.02707

The model will probably never be released by Microsoft, but open-source projects try to replicate it (OpenOrca, Dolphin).
The authors note that while Vicuna-13B displays excellent performance when evaluated with GPT-4, it performs quite poorly on reasoning benchmarks like the SAT, LSAT, GRE, and GMAT.
Self-Instruct involves using an initial set of prompts to ask an LLM to create new instructions. Low-quality or overly similar responses are removed, and the remaining instructions are recycled back into the task pool for further iterations. However, the queries generated via Self-Instruct can lack diversity and complexity.
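As a rough illustration, the loop might look like the sketch below. The `llm` callable and the `difflib`-based similarity are placeholders; the original Self-Instruct pipeline uses ROUGE-L for the deduplication step.

```python
import random
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Cheap stand-in for the ROUGE-L similarity used by Self-Instruct.
    return SequenceMatcher(None, a, b).ratio()

def self_instruct(llm, seed_tasks, rounds=3, sim_threshold=0.7):
    """Grow a task pool by asking an LLM to write new instructions.

    `llm` is a hypothetical callable: prompt string in, list of
    candidate instruction strings out.
    """
    pool = list(seed_tasks)
    for _ in range(rounds):
        # Show the model a few in-context examples drawn from the pool.
        examples = random.sample(pool, k=min(8, len(pool)))
        prompt = "Come up with new task instructions similar to:\n" + "\n".join(examples)

        for cand in llm(prompt):
            # Drop low-quality (here: trivially short) candidates.
            if len(cand.split()) < 3:
                continue
            # Drop candidates that are overly similar to existing tasks.
            if any(similarity(cand, task) > sim_threshold for task in pool):
                continue
            pool.append(cand)  # recycle into the task pool for the next round
    return pool
```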
Problem with natural conversations: LLMs like Vicuna capture the style but not the reasoning process. This motivates the creation of a dataset with step-by-step explanations.
Using GPT-4 for auto-evaluation has several drawbacks, such as limited test set sizes (for example, 80 instructions in Vicuna and 218 in WizardLM) and the inherent biases of GPT-4. It tends to favor models that are instruction-tuned with its own responses, resulting in a preference for longer texts over shorter ones. It also exhibits a bias in the order of candidate responses and overestimates the abilities of smaller models.
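The order bias in particular is easy to mitigate. A common trick (not from the paper, just a sketch with a hypothetical `judge` callable) is to score each pair twice with positions swapped and average the verdicts:

```python
def debiased_score(judge, question, answer_a, answer_b):
    """Average two judgments with the candidate order swapped.

    `judge` is a hypothetical callable that returns a score in [0, 1]
    for the *first* answer it is shown.
    """
    score_ab = judge(question, answer_a, answer_b)        # A shown first
    score_ba = 1.0 - judge(question, answer_b, answer_a)  # B first, inverted
    return (score_ab + score_ba) / 2  # position-averaged score for answer A
```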
Contributions: The authors focus a lot on system instructions and how they can be used to guide the model into adopting the right tone, task, and format. I believe the same effect can be achieved with user instructions (maybe system instructions are slightly more accurate?).
System instructions are sampled from a diverse instruction set, including chain-of-thought reasoning steps, explain like I'm five, being helpful and informative, etc. Each training sample is a triplet with system message, user message, and response.
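For illustration, a single training sample might look like the following (the exact wording of the system message is an assumption, paraphrasing the paper's handcrafted messages):

```python
sample = {
    # One of the 16 handcrafted system messages (wording is illustrative).
    "system": "You are a helpful assistant. Think step by step and justify your answer.",
    # User message: a query sampled from FLAN-v2.
    "user": "If a train travels 120 km in 2 hours, what is its average speed?",
    # Response generated by ChatGPT or GPT-4, including the reasoning steps.
    "response": "The train covers 120 km in 2 hours. "
                "Average speed = distance / time = 120 km / 2 h = 60 km/h.",
}
```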
The authors use the FLAN-v2 dataset as raw data. The FLAN-v2 Collection consists of five sub-collections: CoT, NiV2, T0 (training only), Flan 2021, and Dialogue. The resulting 5M samples are then used as inputs to generate high-quality responses with ChatGPT (5M) and GPT-4 (1M).
These models are prompted with the inputs plus 16 handcrafted system messages to ensure different kinds of responses. Using ChatGPT before GPT-4 is motivated by curriculum learning (learning from the smaller model's outputs first, then from the big model's) and technical reasons (cost, time).
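A minimal sketch of this generation step with the OpenAI Python SDK (the model name, system messages, and wiring are assumptions, not the paper's exact setup):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative stand-ins for the 16 handcrafted system messages.
SYSTEM_MESSAGES = [
    "You are a helpful assistant. Think step by step.",
    "Explain like I'm five.",
]

def generate_response(system_message: str, user_message: str) -> str:
    # "gpt-3.5-turbo" for the 5M ChatGPT samples; "gpt-4" for the 1M GPT-4 samples.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content
```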
They use the LLaMA BPE tokenizer with padding (vocabulary size = 32,001). Multiple input examples are packed into a single sequence to maximize the used context length (2,048 tokens), and padding tokens bring each sequence to a uniform size.
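A sketch of one way to implement the packing step, assuming already-tokenized examples and a pad token added at id 32,000 (the exact procedure is an assumption):

```python
def pack_examples(tokenized_examples, max_len=2048, pad_id=32000):
    """Greedily pack token lists into fixed-length training sequences.

    Examples are concatenated until the next one would overflow
    `max_len`; each finished sequence is padded to a uniform size.
    """
    sequences, current = [], []
    for tokens in tokenized_examples:
        if current and len(current) + len(tokens) > max_len:
            sequences.append(current + [pad_id] * (max_len - len(current)))
            current = []
        current.extend(tokens[:max_len])  # truncate oversized examples
    if current:
        sequences.append(current + [pad_id] * (max_len - len(current)))
    return sequences

# Example: three short "token" lists packed into one 10-token sequence.
print(pack_examples([[1, 2, 3], [4, 5, 6, 7], [8, 9]], max_len=10, pad_id=0))
# -> [[1, 2, 3, 4, 5, 6, 7, 8, 9, 0]]
```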
It was trained for 160 hours on 20 A100 GPUs (4 epochs) on the 5M ChatGPT-generated samples, plus 40 hours on the 1M GPT-4-generated samples.
Results:

- Open-ended generation: Orca is significantly better than Vicuna.
- AGIEval: Orca doesn’t perform as well as ChatGPT.
- BigBench-Hard: Orca is on par with ChatGPT.