# T5-Small Fine-Tuned for Text-to-SQL with LoRA
This model is a fine-tuned version of t5-small on text-to-SQL data using LoRA (Low-Rank Adaptation).
## Model description
The model was trained using PEFT's LoRA implementation with the following configuration:
- `r` (rank): 8
- `lora_alpha`: 32
- `target_modules`: the query and value projection matrices in attention
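The corresponding PEFT configuration would look roughly like the sketch below. The task type and `lora_dropout` value are assumptions, since the card does not state them; the `"q"` and `"v"` module names are the query/value projections in the Hugging Face T5 implementation.

```python
from peft import LoraConfig, TaskType

# LoRA configuration matching the values listed above.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,        # assumption: dropout is not stated in this card
    target_modules=["q", "v"],  # T5 attention query/value projections
)
```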
## Intended uses & limitations
This model is intended to be used for converting natural language queries to SQL statements.
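A minimal inference sketch with `transformers` and `peft` is shown below. The adapter repo id and the input prompt prefix are hypothetical placeholders; adjust them to match how this adapter is published and how its training data was formatted.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model and attach the LoRA adapter.
# "your-username/t5-small-text-to-sql-lora" is a hypothetical adapter id.
base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = PeftModel.from_pretrained(base_model, "your-username/t5-small-text-to-sql-lora")

# The prompt prefix is an assumption; use whatever format the training data used.
question = "translate to SQL: How many employees earn more than 50000?"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```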
## Training procedure
The model was trained using the following hyperparameters:
- Learning rate: 1e-3
- Batch size: 8
- Training steps: 10,000
- Optimizer: AdamW
- LR scheduler: constant
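Put together, the training setup might look like the sketch below, using `Seq2SeqTrainer` with the hyperparameters listed above. The output directory, logging interval, and the `train_dataset` variable are placeholders, since the card does not describe them; `lora_config` is the configuration sketched earlier.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)
from peft import get_peft_model

# Wrap the base model so only the LoRA parameters are trained.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = get_peft_model(model, lora_config)

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-text-to-sql-lora",  # hypothetical output path
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    max_steps=10_000,
    lr_scheduler_type="constant",
    optim="adamw_torch",
    logging_steps=100,  # assumption: not stated in the card
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # a tokenized text-to-SQL dataset (not shown)
)
trainer.train()
```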
## Limitations and bias
This model inherits the limitations and biases of the original t5-small model, including any biases present in its pre-training corpus.