Model Card for helehan/topic-overwrite-llava-7b-lora

GitHub | Paper

Model Details

This model is a LoRA fine-tune of LLaVA-7B trained with the RLHF/RLAIF-style TPO (topic-level self-correctional) method proposed in the paper cited below. Compared with the base model, it shows improved trustworthiness and reduced hallucination.

Usage

Please refer to the GitHub repository linked above for detailed usage instructions.
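
The snippet below is a minimal loading sketch, assuming this checkpoint follows the standard LLaVA LoRA layout and is loaded with the LLaVA codebase's `load_pretrained_model` helper; the base checkpoint name is an assumption, and the GitHub repository remains the authoritative reference for the exact loading code.

```python
# Minimal sketch, assuming the standard LLaVA LoRA loading convention.
# The base checkpoint below is an assumption; see the GitHub repo for the
# authors' exact loading script.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "helehan/topic-overwrite-llava-7b-lora"
model_base = "liuhaotian/llava-v1.5-7b"  # assumed base model

# Returns the tokenizer, the merged LoRA model, the image processor,
# and the maximum context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=model_base,
    model_name=get_model_name_from_path(model_path),
)
```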

Citation

@article{he2024topic,
  title={A Topic-level Self-Correctional Approach to Mitigate Hallucinations in MLLMs},
  author={He, Lehan and Chen, Zeren and Shi, Zhelun and Yu, Tianyu and Shao, Jing and Sheng, Lu},
  journal={arXiv preprint arXiv:2411.17265},
  year={2024}
}