---
language:
- en
tags:
- llama
- instruction-tuning
- large-language-model
- autoregressive
- text-generation
license: cc-by-nc-4.0
---
|
|
|
This is the HuggingFace model release of the instruction-tuned LLaMA-7B model used in our paper [FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation](https://arxiv.org/abs/2305.14251).
|
|
|
Please refer to the README for instructions on how to set up the model ([link](https://github.com/shmsw25/FActScore#download-the-data)).
|
|
|
Credits to [Yizhong Wang](https://homes.cs.washington.edu/~yizhongw/) for originally training this model.