Impressive
#1
by
captainst
- opened
Really impressive. The model is able to answer questions based on a software-log input of about 2K tokens.
My first tests show that its performance is comparable to bigger models, like 13B ones, and sometimes even better.
Awesome, thank you for letting us know!
A performance comparison with the previous version is on the MODEL CARD: https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md
This model is built on OpenLLaMA, not Llama 2. It was fine-tuned on version 2 of our open-instruct training set.
Oh sorry, I mixed them up. It's just that LLaMA 2 has been such a hot topic lately.