Update README.md
Add use case info, example code for fine-tuning Mistral 7B Instruct.
README.md
CHANGED
@@ -8,4 +8,84 @@ tags:
pretty_name: Presto/Athena Text to SQL Dataset
size_categories:
- 1K<n<10K
---
I created this dataset using [sqlglot](https://github.com/tobymao/sqlglot) to auto-convert the Spider and WikiSQL datasets to Presto syntax, along with some regexes for additional cleanup.
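
For illustration, the core of that conversion can be reproduced with sqlglot's `transpile`. This is a minimal sketch, assuming a SQLite source dialect for Spider; the exact conversion script and the regex cleanup steps are not shown here:

```python
import sqlglot

# Transpile a Spider-style SQLite query into the Presto/Athena dialect.
query = "SELECT name FROM singer WHERE age > 20 LIMIT 5"
presto_query = sqlglot.transpile(query, read="sqlite", write="presto")[0]
print(presto_query)
```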

An example use case is fine-tuning an existing model to produce Presto/Athena text-to-SQL, provided the model already performs well on the standard SQL syntax used by the major text-to-SQL training datasets.
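
Judging from the prompt-formatting code below, each example's `text` field packs the table schema, the question, and the Presto answer into a single `table: ... Q: ... A: ...` string. A made-up row for illustration (not an actual dataset entry):

```python
# Hypothetical row in the "table: ... Q: ... A: ..." layout that the
# create_prompt function below expects; not taken from the dataset itself.
row = {
    "text": (
        "table: orders (order_id BIGINT, created_at DATE, total DOUBLE) "
        "Q: What is the total revenue per day? "
        "A: SELECT created_at, SUM(total) AS revenue FROM orders GROUP BY created_at"
    )
}
```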

Example of fine-tuning with this dataset (in this case for Mistral 7B Instruct):

```python
import json

import pandas as pd
from datasets import Dataset

def read_jsonl(file_path):
    """Load a JSONL file into a list of dicts, one per line."""
    data = []
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            json_data = json.loads(line.strip())
            data.append(json_data)
    return data

# Read the train and validation files
train_data = read_jsonl('training_data/train.jsonl')  # use your own path to the training/validation data here
valid_data = read_jsonl('training_data/valid.jsonl')

# Convert to pandas DataFrames
train_df = pd.DataFrame(train_data)
valid_df = pd.DataFrame(valid_data)

# Convert the DataFrames to Hugging Face Datasets
train_dataset = Dataset.from_pandas(train_df)
valid_dataset = Dataset.from_pandas(valid_df)

# Example of processing
# train_texts = [example['text'] for example in train_dataset]
# valid_texts = [example['text'] for example in valid_dataset]

instruct_tune_dataset = {
    "train": train_dataset,
    "test": valid_dataset
}

...

def create_prompt(sample):
    """
    Update the prompt template: combine the prompt and input
    into a single column.
    """
    bos_token = "<s>"
    eos_token = "</s>"

    # Rewrite the "table: ... Q: ... A: ..." markers in the raw text
    # into instruction/input/response sections.
    original_system_message = "table:"
    system_message = "Use the provided input to create an instruction that could have been used to generate the response with an LLM. The query dialect is Athena/Presto. The database table used is: "
    question_and_response = sample["text"].replace(original_system_message, "").replace("Q:", "\n\n### Input:\n").replace("A:", "\n### Response:\n").strip()

    full_prompt = ""
    full_prompt += bos_token
    full_prompt += "### Instruction:"
    full_prompt += "\n" + system_message
    full_prompt += "\n" + question_and_response
    full_prompt += eos_token

    return full_prompt

...

from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=True,
    formatting_func=create_prompt,  # applies create_prompt to every training and eval example
    args=args,
    train_dataset=instruct_tune_dataset["train"],
    eval_dataset=instruct_tune_dataset["test"]
)
```
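
The `...` elisions above stand in for model, tokenizer, PEFT, and training-argument setup, which the snippet leaves out. A minimal sketch of one possible configuration follows; the checkpoint name and every hyperparameter here are illustrative assumptions, not settings from this dataset card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig

# Assumed checkpoint; any Mistral 7B Instruct variant should work the same way.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"

model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

# Illustrative LoRA settings; tune rank/alpha/dropout for your hardware and data.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
max_seq_length = 1024

# Illustrative training arguments, not tuned values.
args = TrainingArguments(
    output_dir="mistral-presto-sql",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
)
```

With these defined, the `SFTTrainer` call above runs via `trainer.train()`.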
|