Same methodology as Kalomaze's 16B experiment (https://huggingface.co/kalomaze/Qwen3-16B-A3B/):
- measure the probability that each expert activates, per layer, over a personal set of fairly diverse calibration data
- prune some of the least-used experts in each layer, with the router and expert indexing reordered per layer (see the sketch after this list)
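
For concreteness, here is a minimal sketch of what the measure-and-prune loop could look like, assuming the Transformers Qwen3-MoE layout (each decoder layer's `mlp` exposing a `gate` router Linear and an `experts` ModuleList, with `num_experts` / `num_experts_per_tok` in the config). The calibration file, the number of experts kept, and some attribute names are illustrative, not the exact script behind this checkpoint.

```python
# Sketch: count per-layer expert activations over calibration data, then prune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

num_layers = model.config.num_hidden_layers
num_experts = model.config.num_experts
top_k = model.config.num_experts_per_tok

# How often each expert lands in the router's top-k, per layer.
counts = torch.zeros(num_layers, num_experts, dtype=torch.long)
tokens_seen = 0

def make_hook(layer_idx):
    def hook(module, inputs, output):
        # `gate` maps hidden states to router logits of shape (tokens, num_experts).
        chosen = output.topk(top_k, dim=-1).indices.flatten().cpu()
        counts[layer_idx] += torch.bincount(chosen, minlength=num_experts)
    return hook

handles = [layer.mlp.gate.register_forward_hook(make_hook(i))
           for i, layer in enumerate(model.model.layers)]

with torch.no_grad():
    for line in open("calibration.txt"):            # placeholder calibration corpus
        batch = tok(line, return_tensors="pt", truncation=True, max_length=2048).to(model.device)
        model(**batch)
        tokens_seen += batch["input_ids"].numel()

for h in handles:
    h.remove()

# Per-layer probability that a given expert activates for a token.
probs = counts.float() / max(tokens_seen, 1)

# Keep the most-used experts per layer, reindexing router rows and experts to match.
n_keep = 96                                          # illustrative, e.g. keep 96 of 128 experts
keep_idx = probs.argsort(dim=-1, descending=True)[:, :n_keep]
for i, layer in enumerate(model.model.layers):
    keep = keep_idx[i].sort().values                 # preserve original relative ordering
    layer.mlp.gate.weight.data = layer.mlp.gate.weight.data[keep]
    layer.mlp.gate.out_features = n_keep
    layer.mlp.experts = torch.nn.ModuleList(layer.mlp.experts[j] for j in keep.tolist())
    layer.mlp.num_experts = n_keep                   # block-level attribute, if present
model.config.num_experts = n_keep
```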
Currently the model is unusable, but I am working on training it over a small SFT set of Claude instruct data to "heal" it, so to speak.
https://wandb.ai/new-eden/Prune-Experiments/runs/45utvk5c?nw=nwuserdeltavector
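
For illustration only (not the exact recipe behind the run linked above), a small SFT "healing" pass could look like the following TRL sketch; the dataset path, expected column format, and hyperparameters are placeholders.

```python
# Hypothetical SFT "healing" run with TRL; paths, hyperparameters, and the
# dataset (expected to have a "messages" or "text" column) are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="claude_instruct_sft.jsonl", split="train")

args = SFTConfig(
    output_dir="qwen3-150b-healed",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    num_train_epochs=1,
    bf16=True,
    report_to="wandb",                  # log to Weights & Biases, as in the run above
)

trainer = SFTTrainer(
    model="Delta-Vector/Qwen-3-150B",   # the pruned checkpoint
    args=args,
    train_dataset=dataset,
)
trainer.train()
```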
Base model: Qwen/Qwen3-235B-A22B