jayalakshmiK committed be4dad9 (verified) · 1 Parent(s): 2aab118
Upload deduplicated_text_data.txt

Files changed (1): deduplicated_text_data.txt ADDED (+25, -0)
Maxime Labonne · August 7, 2024

Orca is a 13B-parameter LLM with ChatGPT-level performance, thanks to a huge dataset of 5M samples with step-by-step explanations.
📝 Paper: https://arxiv.org/abs/2306.02707

The model will probably never be released by Microsoft, but open-source projects try to replicate it (OpenOrca, Dolphin).
The authors note that while Vicuna-13B displays excellent performance when evaluated with GPT-4, it performs quite poorly on reasoning benchmarks like the SAT, LSAT, GRE, and GMAT.
Self-Instruct involves using an initial set of prompts to ask an LLM to create new instructions. Low-quality or overly similar responses are removed, and the remaining instructions are recycled back into the task pool for further iterations. However, the queries generated via Self-Instruct can lack diversity and complexity.
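As a rough illustration, here is a minimal sketch of that loop in Python. It assumes hypothetical llm_generate and similarity helpers (e.g. a chat API call and a ROUGE-L score); it is not the original Self-Instruct implementation:

```python
import random

def self_instruct(seed_tasks, llm_generate, similarity,
                  n_rounds=3, per_round=8, sim_threshold=0.7):
    """Grow a task pool from seed instructions, Self-Instruct style.

    llm_generate(prompt) -> str and similarity(a, b) -> float in [0, 1]
    are assumed helpers (e.g. an API call and a ROUGE-L score).
    """
    pool = list(seed_tasks)
    for _ in range(n_rounds):
        # Sample a few existing tasks as in-context demonstrations.
        demos = random.sample(pool, k=min(4, len(pool)))
        prompt = ("Write one new, diverse task instruction:\n"
                  + "\n".join(f"- {d}" for d in demos))
        for _ in range(per_round):
            cand = llm_generate(prompt).strip()
            # Drop low-quality (too short) or overly similar candidates.
            if len(cand) < 10:
                continue
            if any(similarity(cand, t) > sim_threshold for t in pool):
                continue
            # Surviving instructions are recycled back into the pool.
            pool.append(cand)
    return pool
```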
Problem with natural conversations: LLMs like Vicuna capture the style but not the reasoning process. This motivates the creation of a dataset with step-by-step explanations.
Using GPT-4 for auto-evaluation has several drawbacks, such as limited test set sizes (for example, 80 instructions in Vicuna and 218 in WizardLM) and the inherent biases of GPT-4. It tends to favor models that are instruction-tuned with its own responses, resulting in a preference for longer texts over shorter ones. It also exhibits a bias in the order of candidate responses and overestimates the abilities of smaller models.
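To make the order bias concrete, a common mitigation (not something the paper implements) is to score each pair twice with the candidates swapped and average the results; judge here is an assumed helper wrapping a GPT-4 call that returns two scores:

```python
def judge_pair(judge, question, answer_a, answer_b):
    """Score two candidate answers with both orderings to cancel out
    the judge's position bias. judge(prompt) -> (score_1, score_2) is
    an assumed wrapper around a GPT-4 evaluation call."""
    prompt = ("Rate each answer from 1 to 10.\n"
              "Question: {q}\nAnswer 1: {a1}\nAnswer 2: {a2}")
    s1_a, s1_b = judge(prompt.format(q=question, a1=answer_a, a2=answer_b))
    s2_b, s2_a = judge(prompt.format(q=question, a1=answer_b, a2=answer_a))
    # Average the two orderings so neither answer benefits from its position.
    return (s1_a + s2_a) / 2, (s1_b + s2_b) / 2
```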
Contributions: The authors focus a lot on system instructions and how they can be used to guide the model into adopting the right tone, task, and format. I believe the same effect can be achieved with user instructions (maybe system instructions are slightly more accurate?).
System instructions are sampled from a diverse instruction set, including chain-of-thought reasoning steps, explain-like-I'm-five, being helpful and informative, etc. Each training sample is a triplet of a system message, a user message, and a response, as in the example below.
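For illustration, one such triplet could look like this (the content and field names are made up, not taken from the paper's data):

```python
# Hypothetical Orca-style training triplet; field names are illustrative.
sample = {
    "system_message": ("You are a helpful assistant. Think step by step "
                       "and justify your answer."),
    "user_message": ("If a train travels 120 km in 1.5 hours, "
                     "what is its average speed?"),
    "response": "Average speed = distance / time = 120 km / 1.5 h = 80 km/h.",
}
```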
The authors use the FLAN-v2 dataset as raw data. The FLAN-v2 collection consists of five sub-collections: CoT, NiV2, T0 (training only), Flan 2021, and Dialogue. The resulting 5M samples are then used as inputs to generate high-quality responses with ChatGPT (5M) and GPT-4 (1M).
These models are prompted with the inputs plus 16 handcrafted system messages to ensure different kinds of responses. Using two teachers is motivated by curriculum learning (the student first learns from the weaker teacher, ChatGPT, then from the stronger one, GPT-4) and by technical reasons (cost, time).
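A simplified sketch of this generation step; the system messages shown are placeholders (the paper uses 16 handcrafted variants), and teacher_generate is an assumed wrapper around a ChatGPT or GPT-4 call:

```python
import random

# Placeholder system messages; the paper's 16 variants are handcrafted.
SYSTEM_MESSAGES = [
    "You are a helpful assistant. Explain your reasoning step by step.",
    "You are a teacher. Explain the answer like I'm five.",
]

def make_training_sample(teacher_generate, user_input):
    """teacher_generate(system, user) -> str is an assumed wrapper
    around a ChatGPT or GPT-4 API call."""
    system = random.choice(SYSTEM_MESSAGES)  # simplified selection
    return {
        "system_message": system,
        "user_message": user_input,
        "response": teacher_generate(system, user_input),
    }
```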
They use the LLaMA BPE tokenizer with an added padding token (vocabulary size = 32,001).
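In the Hugging Face transformers ecosystem, adding a dedicated pad token looks roughly like this (the checkpoint name is a placeholder; 32,000 BPE tokens plus one pad token gives 32,001):

```python
from transformers import LlamaTokenizer

# Placeholder checkpoint name; any LLaMA-13B tokenizer would do.
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-13b")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
print(len(tokenizer))  # 32001 = 32,000 BPE tokens + 1 pad token
# The model's embedding matrix must be resized to match, e.g.
# model.resize_token_embeddings(len(tokenizer)).
```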
Multiple input examples are packed into a single sequence to maximize the used context length (2,048 tokens). They use padding tokens to get a uniform size.
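A minimal greedy packing sketch (the paper's exact algorithm may differ; pad_id is whatever ID the added pad token receives):

```python
def pack_sequences(tokenized_examples, max_len=2048, pad_id=32000):
    """Greedily pack tokenized examples into fixed-length sequences,
    padding the remainder. A simplified sketch, not the paper's code."""
    sequences, current = [], []
    for tokens in tokenized_examples:
        tokens = tokens[:max_len]  # truncate overly long examples
        if len(current) + len(tokens) > max_len:
            # Close the current sequence: pad to max_len, start a new one.
            sequences.append(current + [pad_id] * (max_len - len(current)))
            current = []
        current.extend(tokens)
    if current:
        sequences.append(current + [pad_id] * (max_len - len(current)))
    return sequences
```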
It was trained for 160 hours on 20 A100 GPUs (4 epochs) on the 5M ChatGPT-generated samples, plus 40 hours on the 1M GPT-4-generated samples.
Open-ended generation: Orca is significantly better than Vicuna.
AGIEval: Orca doesn't perform as well as ChatGPT.
BigBench-Hard: Orca is on par with ChatGPT.