---
language:
- code
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- Vipitis/Shadertoys-fine
pipeline_tag: text-generation
tags:
- code
- shader
base_model: bigcode/santacoder
widget:
- text: void mainImage( out vec4 fragColor, in vec2 fragCoord )
  example_title: mainImage
  group: Shadertoy
model-index:
- name: santacoder-finetuned-the-stack-glsl
  results:
  - task:
      type: text-generation
      name: ShaderEval
    dataset:
      type: Vipitis/Shadertoys-fine
      name: Shadertoys-fine
      config: return_completion
      revision: 0.0.2
    metrics:
    - type: exact_match
      value: 0.567
      name: 300 samples, greedy decoding
      verified: false
    - type: exact_match
      value: 0.59749
      name: all samples, greedy decoding
      verified: false
---
[Santacoder](https://huggingface.co/bigcode/santacoder) finetuned on [Shadertoys-fine](https://huggingface.co/datasets/Vipitis/Shadertoys-fine) for 1000 steps with a per-device batch size of 2 (with 4 gradient accumulation steps, an effective batch size of 8) and a full sequence length of 2048.
The adapted finetuning script can be found [here](./train.py).
Try the model in the [ShaderCoder](https://huggingface.co/spaces/Vipitis/ShaderCoder) demo space.
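A minimal usage sketch with the `transformers` text-generation pipeline. The repo id is taken from the model name above; santacoder-based checkpoints ship custom model code, so `trust_remote_code=True` is required:
```python
# Minimal sketch: complete a Shadertoy mainImage signature with this model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Vipitis/santacoder-finetuned-the-stack-glsl",
    trust_remote_code=True,  # santacoder uses custom model code
)

prompt = "void mainImage( out vec4 fragColor, in vec2 fragCoord )"
output = generator(prompt, max_new_tokens=128)
print(output[0]["generated_text"])
```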
### Finetuning parameters
```sh
python3 train.py --model_path "bigcode/santacoder" \
--dataset_name "Vipitis/Shadertoys-fine" \
--data_column "code" \
--split "train" \
--seq_length 2048 \
--max_steps 1000 \
--batch_size 2 \
--gradient_accumulation_steps 4 \
--learning_rate 5e-5 \
--num_warmup_steps 100 \
--eval_freq 100 \
--save_freq 100 \
--log_freq 1 \
--output_dir "checkpoint_dir" \
--no_fp16
```
The main purpose of this model is to explore whether finetuning improves performance on [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderEval). On the `return_completion` task with greedy decoding, this model reached an exact-match score of 0.567 on the first 300 samples and 0.59749 on all samples.
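As the model-index above indicates, the task is scored with exact match between the generated and reference function bodies. A hedged sketch of that metric using the `evaluate` library (the strings below are hypothetical, not taken from the benchmark):
```python
# Illustrative sketch of the benchmark metric, not the ShaderEval harness itself.
import evaluate

exact_match = evaluate.load("exact_match")
predictions = ["    return vec4(col, 1.0);"]  # hypothetical model completion
references = ["    return vec4(col, 1.0);"]   # hypothetical ground truth
print(exact_match.compute(predictions=predictions, references=references))
# -> {'exact_match': 1.0}
```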
### Disclaimer
While the train/test split is held out, there is substantial data contamination, so the model's results on this simple benchmark can't be trusted.
Better tasks for the benchmark will be developed and tested against these models.
The license is carried over from the base model; however, the training data has an undefined license. Check the details in [Shadertoys](https://huggingface.co/datasets/Vipitis/Shadertoys).