---
license: apache-2.0
language:
- en
tags:
- text-to-sql
- text to sql
pretty_name: Presto/Athena Text to SQL Dataset
size_categories:
- 1K<n<10K
---

I created this dataset using [sqlglot](https://github.com/tobymao/sqlglot) to auto-convert the Spider and WikiSQL datasets to Presto syntax, along with running some regexes for additional cleanup.
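
As a rough illustration of the kind of conversion involved, here is a minimal sqlglot sketch (the sample query is invented, and the actual build script may have used different options):

```python
import sqlglot

# Transpile a query from SQLite syntax (Spider ships its databases as SQLite)
# into Presto syntax. sqlglot.transpile returns a list of converted statements.
presto_sql = sqlglot.transpile(
    "SELECT name FROM singer ORDER BY age DESC LIMIT 1",
    read="sqlite",
    write="presto",
)[0]
print(presto_sql)
```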

An example use case is fine-tuning an existing model so that it responds with Presto/Athena text-to-SQL, provided the model already performs well on the standard SQL syntax used by the major text-to-SQL training datasets.

Example of fine-tuning using this dataset (in this case for Mistral 7B Instruct):

```python
import json
import pandas as pd
from datasets import Dataset

def read_jsonl(file_path):
    data = []
    with open(file_path, 'r', encoding='utf-8') as file:
        for line in file:
            json_data = json.loads(line.strip())
            data.append(json_data)
    return data

# Read the train and validation files
train_data = read_jsonl('training_data/train.jsonl')
valid_data = read_jsonl('training_data/valid.jsonl')

# Convert to pandas DataFrame
train_df = pd.DataFrame(train_data)
valid_df = pd.DataFrame(valid_data)

# Convert DataFrame to Huggingface Dataset
train_dataset = Dataset.from_pandas(train_df)
valid_dataset = Dataset.from_pandas(valid_df)

# Example of processing
# train_texts = [example['text'] for example in train_dataset]
# valid_texts = [example['text'] for example in valid_dataset]

instruct_tune_dataset = {
    "train": train_dataset,
    "test": valid_dataset
}

...

def create_prompt(sample):
    """
    Combine the system message, question, and answer from a sample
    into a single Llama-2-style [INST] prompt string.
    """
    bos_token = "<s>"
    original_system_message = "Below is an instruction that describes a task. Write a response that appropriately completes the request."
    system_message = "Write a SQL query or use a function to answer the following question. Use the SQL dialect Presto for AWS Athena."
    question = sample["question"].replace(original_system_message, "").strip()
    response = sample["answer"].strip()
    eos_token = "</s>"

    full_prompt = ""
    full_prompt += bos_token
    full_prompt += "[INST] <<SYS>>" + system_message + "<</SYS>>\n\n"
    full_prompt += question + " [/INST] "
    full_prompt += response
    full_prompt += eos_token

    return full_prompt
...

from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=True,
    formatting_func=create_prompt,  # applies create_prompt to every example in the train and eval datasets
    args=args,
    train_dataset=instruct_tune_dataset["train"],
    eval_dataset=instruct_tune_dataset["test"],
)
```
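
For reference, applying `create_prompt` to a single record (the question/answer pair below is illustrative, not taken from the dataset) produces a prompt like this:

```python
sample = {
    "question": "How many singers do we have?",
    "answer": "SELECT COUNT(*) FROM singer",
}
print(create_prompt(sample))
# <s>[INST] <<SYS>>Write a SQL query or use a function to answer the following question. Use the SQL dialect Presto for AWS Athena.<</SYS>>
#
# How many singers do we have? [/INST] SELECT COUNT(*) FROM singer</s>
```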