What was the training setting of qformer, LLM?

#9
by vigneshwar472 - opened

I am working on the Dense Video Captioning task on the UCA dataset. I want to fine-tune the Q-Former and the LLM (with LoRA).

What training settings and hyperparameters did you use for pre-training?

OpenGVLab org

The configuration files for each training stage are in the Ask-Anything repo:

- Stage 1 (Q-Former pre-training): https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/scripts/videochat_vicuna/config_7b_stage1.py
- Stage 2 (alignment with the Mistral LLM): https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/scripts/videochat_mistral/config_7b_stage2.py
- Stage 3 (LoRA SFT): https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/scripts/videochat_mistral/config_7b_stage3.py
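If it helps, here is a minimal sketch of how you might load the stage-3 style settings as a base and override them for a custom fine-tuning run on UCA. Every value below is an illustrative placeholder, not the actual hyperparameters from the repo; take the real values from `config_7b_stage3.py` linked above.

```python
# Illustrative sketch only: mirror the structure of a stage-3 config dict,
# then override what you need for a custom dataset. All numeric values are
# placeholders; the real ones live in config_7b_stage3.py.
stage3_cfg = {
    "model": {
        "use_lora": True,        # enable LoRA adapters on the LLM
        "lora_r": 16,            # placeholder rank
        "lora_alpha": 32,        # placeholder scaling
        "lora_dropout": 0.1,     # placeholder dropout
        "freeze_vision_encoder": True,  # typically only QFormer + LoRA train
    },
    "optimizer": {
        "lr": 2e-5,              # placeholder learning rate
        "weight_decay": 0.02,    # placeholder
    },
    "num_epochs": 3,             # placeholder
}

def override(cfg: dict, updates: dict) -> dict:
    """Shallow-merge per-section overrides into a copy of the base config."""
    merged = {k: (dict(v) if isinstance(v, dict) else v) for k, v in cfg.items()}
    for section, vals in updates.items():
        if isinstance(vals, dict):
            merged.setdefault(section, {}).update(vals)
        else:
            merged[section] = vals
    return merged

# Example: lower the learning rate for a smaller dataset such as UCA.
uca_cfg = override(stage3_cfg, {"optimizer": {"lr": 1e-5}})
```

The point is just to keep the upstream config untouched and layer dataset-specific overrides on top, so you can diff your run against the published stage-3 settings.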
