# GPT4 x Alpaca

As a base model we used https://huggingface.co/chavinlo/alpaca-13b.

It was fine-tuned on GPT-4's responses for 3 epochs. No LoRA was used; the full model weights were fine-tuned.
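
Below is a minimal usage sketch with 🤗 Transformers. The repo id, generation settings, and the Alpaca-style prompt template are assumptions for illustration, not part of this card; adjust them to match the actual files in this repository.

```python
# Minimal inference sketch. The repo id below is a placeholder; replace it
# with this repository's actual id. Prompt format follows the standard
# Alpaca instruction template (an assumption, not confirmed by this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chavinlo/gpt4-x-alpaca"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a 13B model on GPU
    device_map="auto",          # requires `accelerate`
)

# Alpaca-style instruction prompt
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain the difference between a full fine-tune and a LoRA fine-tune.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```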