---
license: bigscience-bloom-rail-1.0
language:
- es
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- alpaca
- bloom
- LLM
datasets:
- tatsu-lab/alpaca
inference: false
widget:
- text: "Below is an instruction that describes a task, paired with an input that provides further context.\nWrite a response that appropriately completes the request.\n### Instruction:\nTell me about alpacas"
---

<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/platzi/chivoom/resolve/main/chivoom_logo-removebg-preview.png" alt="Chivoom logo">
</div>

# Chivoom: Spanish Alpaca (Chiva) 🐐 + BLOOM 💮

# IMPORTANT: This is just a proof of concept (PoC) and still a work in progress!

## Adapter Description

This adapter was created with the [PEFT](https://github.com/huggingface/peft) library. It fine-tunes the base model **BigScience/BLOOM 7B1** on **Stanford's Alpaca dataset** (translated into Spanish) using the **LoRA** method.
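
For reference, a PEFT LoRA adapter for BLOOM is typically configured along the lines below. This is a minimal sketch, not the configuration used to train this adapter: the rank, alpha, and dropout values are assumptions, and only the target module name is specific to BLOOM's architecture.

```py
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical hyperparameters: the actual values for this adapter are not published.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                # LoRA rank (assumption)
    lora_alpha=16,                      # LoRA scaling factor (assumption)
    lora_dropout=0.05,                  # dropout on the LoRA layers (assumption)
    target_modules=["query_key_value"], # BLOOM's fused attention projection
    bias="none",
)

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank matrices are trainable
```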

## Model Description

[BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1), the BigScience Large Open-science Open-access Multilingual Language Model.

## Training data

We translated the Alpaca dataset into Spanish.

Alpaca is a dataset of **52,000** instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction tuning for language models and make them follow instructions better.

The authors built on the data generation pipeline from the [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:

- The `text-davinci-003` engine was used to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.

This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500). In a preliminary study, the authors also found the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
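
The original English dataset referenced in this card's metadata can be inspected with the `datasets` library; the snippet below illustrates its instruction/input/output schema. Note that the Spanish translation actually used for training is not published here.

```py
from datasets import load_dataset

# Load the original (English) Alpaca dataset listed in this card's metadata.
alpaca = load_dataset("tatsu-lab/alpaca", split="train")
print(len(alpaca))  # ~52K records

example = alpaca[0]
print(example["instruction"])  # the task description
print(example["input"])        # optional extra context (often empty)
print(example["output"])       # the text-davinci-003 demonstration
```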

### Training procedure

TBA

## How to use

```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Requires a CUDA GPU plus the `bitsandbytes` and `accelerate` packages
# for 8-bit loading.
peft_model_id = "platzi/chivoom"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")

# Attach the LoRA adapter weights to the base model.
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()


# Based on the inference code by `tloen/alpaca-lora`
def generate_prompt(instruction, input=None):
    if input:
        return f"""A continuación se muestra una instrucción que describe una tarea, emparejada con una entrada que proporciona más contexto. Escribe una respuesta que complete adecuadamente la petición.

### Instrucción:
{instruction}

### Entrada:
{input}

### Respuesta:"""
    else:
        return f"""A continuación se muestra una instrucción que describe una tarea. Escribe una respuesta que complete adecuadamente la petición.

### Instrucción:
{instruction}

### Respuesta:"""


def generate(
    instruction,
    input=None,
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    **kwargs,
):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=256,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s)
    # The prompt ends with "### Respuesta:", so split on that marker
    # to return only the generated answer.
    return output.split("### Respuesta:")[1]


instruction = "¿Qué es un chivo?"

print("Instrucción:", instruction)
print("Respuesta:", generate(instruction))
```
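
If you want a standalone checkpoint instead of base model plus adapter, recent versions of PEFT can merge the LoRA weights into the base model. This is an optional step not described by the authors; a sketch, assuming the base model is loaded in full precision (merging is not supported on 8-bit weights) and using an arbitrary output directory name.

```py
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM

peft_model_id = "platzi/chivoom"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in half precision (not 8-bit): merging LoRA weights
# requires unquantized parameters.
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, peft_model_id).merge_and_unload()
merged.save_pretrained("chivoom-merged")  # hypothetical output directory
```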