Questions about architecture (+ LoRA)

#16
by alex0dd

Hello!
You mention that Smaug is finetuned from https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO, which itself is finetuned on https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4, which uses LoRA.
However, neither MoMo-72B-lora-1.8.7-DPO nor MoMo-72B-LoRA-V1.4 provides LoRA weights.

So my questions are:

  1. Was Smaug-72B directly finetuned on https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO without using LoRA weights (i.e., were the LoRA weights already merged into the model's weights)?
  2. Are LoRA weights needed to correctly evaluate Smaug-72B's accuracy?

For both MoMo and Smaug, the LoRA weights are merged back into the base weights, so no extra adapter weights are necessary.
See: https://huggingface.co/docs/peft/main/en/developer_guides/lora#merge-adapters

Hi! Did you also use LoRA for finetuning, or did you do a full finetune? If full, what setup did you use on your 8xH100 machine to achieve a full finetune of such a large model?
