instruction-pretrain committed on
Commit 41d3649 · verified · 1 Parent(s): 66f81e3

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ datasets:
 - WizardLM/WizardLM_evol_instruct_V2_196k
 ---
 # Instruction Pre-Training: Language Models are Supervised Multitask Learners
-This repo contains the **finance model developed from Llama3-8B** in our paper **Instruction Pre-Training: Language Models are Supervised Multitask Learners**.
+This repo contains the **finance model developed from Llama3-8B** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
 
 We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. **In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.**
 
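
For reference, a minimal usage sketch with the Hugging Face `transformers` library is shown below. The repo ID `instruction-pretrain/finance-Llama3-8B` and the plain-instruction prompt format are assumptions inferred from this model card, not stated in the diff above; adjust them to the actual repository.

```python
# Minimal sketch: load the finance model and generate a response.
# Assumptions: repo ID "instruction-pretrain/finance-Llama3-8B" (inferred from the
# model card, may differ) and a plain-text instruction prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instruction-pretrain/finance-Llama3-8B"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What factors drive a company's price-to-earnings ratio?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Print only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```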