DeepSeek-R1-Lite

#6 opened by Dampfinchen

Hello. Those distills are good and all, but they don't have much to do with R1, since they use a completely different architecture. What I would like to see is R1 on the same architecture, just scaled down the way V2-Lite was. I think that would be much more compelling, and it would also support cutting-edge features like MLA, which cuts down memory usage during inference a lot.
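
To put the MLA memory point in rough numbers, here's a back-of-the-envelope sketch. The layer count, head count, and latent dimensions below are illustrative assumptions, not DeepSeek's actual config:

```python
# Back-of-the-envelope KV-cache comparison: standard multi-head attention vs. MLA.
# All dimensions below are illustrative assumptions, not DeepSeek's actual config.

def kv_cache_bytes(tokens, layers, per_token_per_layer_elems, bytes_per_elem=2):
    """Total KV-cache size in bytes for a given context length (fp16/bf16 by default)."""
    return tokens * layers * per_token_per_layer_elems * bytes_per_elem

# Assumed model shape for a hypothetical ~16B-class "Lite" config.
layers, n_heads, head_dim = 27, 16, 128
context = 32_768

# Standard MHA caches full keys and values: 2 * n_heads * head_dim per token per layer.
mha_elems = 2 * n_heads * head_dim

# MLA caches one compressed latent plus a small decoupled RoPE key,
# e.g. 512 latent dims + 64 RoPE dims -- again, assumed numbers.
mla_elems = 512 + 64

print(f"MHA cache: {kv_cache_bytes(context, layers, mha_elems) / 2**30:.2f} GiB")
print(f"MLA cache: {kv_cache_bytes(context, layers, mla_elems) / 2**30:.2f} GiB")
```

With these assumed dimensions the cache shrinks from roughly 6.8 GiB to under 1 GiB at 32K context, which is the kind of saving that matters for local inference.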

Yes V2 Lite was the perfect MoE size.

V2 Lite is perhaps a little small. I think the Qwen 3 MoE is almost the perfect size; with maybe 2B more activated parameters it could be even more capable.

Yes please!

No. That's what distillation means: without changing the architecture, you just train on new data from the teacher!
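
In sketch form, sequence-level distillation really is just "same student architecture, new teacher-generated data." The model names below are placeholders, and a real run would loop over many traces with a proper trainer:

```python
# Minimal sketch of sequence-level distillation: keep the student's architecture as-is
# and simply train it on text generated by the teacher. Model names are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_id = "teacher-org/big-reasoning-model"   # hypothetical
student_id = "student-org/small-model"           # hypothetical

teacher_tok = AutoTokenizer.from_pretrained(teacher_id)
teacher = AutoModelForCausalLM.from_pretrained(teacher_id, torch_dtype=torch.bfloat16)

# 1) Sample a reasoning trace from the teacher.
prompt = "Solve step by step: what is 17 * 24?"
inputs = teacher_tok(prompt, return_tensors="pt")
trace = teacher_tok.decode(
    teacher.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True
)

# 2) Fine-tune the student on those traces with the ordinary causal-LM objective.
student_tok = AutoTokenizer.from_pretrained(student_id)
student = AutoModelForCausalLM.from_pretrained(student_id, torch_dtype=torch.bfloat16)
batch = student_tok(trace, return_tensors="pt")
loss = student(**batch, labels=batch["input_ids"]).loss
loss.backward()  # in practice: a full Trainer/optimizer loop over many traces
```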

We already have a DeepSeek V3 Lite at home:
https://huggingface.co/moonshotai/Moonlight-16B-A3B-Instruct
DeepSeek could continue pretraining it and do supervised fine-tuning.

No, Moonlight uses a different tokenizer.

What's the problem? The dataset is just text, isn't it? Tools like axolotl and others will handle the rest.
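
That's the gist: because the fine-tuning data is plain text, it simply gets re-tokenized with whatever tokenizer the base model ships with. A minimal sketch (the dataset file name is hypothetical; the model ID is the Moonlight checkpoint linked above):

```python
# Sketch of why the tokenizer mismatch isn't a blocker: the training data is plain
# text, so it is re-tokenized with the base model's own tokenizer before SFT.
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("moonshotai/Moonlight-16B-A3B-Instruct")
ds = load_dataset("json", data_files="distill_traces.jsonl")["train"]  # hypothetical file

def tokenize(example):
    return tok(example["text"], truncation=True, max_length=4096)

tokenized = ds.map(tokenize, remove_columns=ds.column_names)
# From here a standard SFT stack (axolotl, TRL, plain Trainer, ...) takes over.
```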