---
license: apache-2.0
datasets:
  - lukemarks/vader-post-training
base_model:
  - meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
---

These SAEs were trained on the residual-stream outputs of each layer using EleutherAI's SAE library (https://github.com/EleutherAI/sae), on a subset of FineWebText given in lukemarks/vader-post-training.
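To make the setup concrete, here is a minimal dependency-free sketch of the forward pass of a top-k sparse autoencoder, the rough shape of what the Eleuther library trains on residual-stream activations. All sizes and weights below are hypothetical toy values, not taken from this repository; the real training uses the library linked above.

```python
def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Sketch of a top-k SAE: encode a residual-stream vector x into a
    sparse latent vector z, then reconstruct x_hat from the decoder."""
    # Encoder pre-activations: one scalar per SAE latent.
    pre = [sum(xi * wij for xi, wij in zip(x, col)) + b
           for col, b in zip(W_enc, b_enc)]
    # Keep only the k largest latents; zero out the rest (the sparsity).
    top = set(sorted(range(len(pre)), key=lambda i: pre[i], reverse=True)[:k])
    z = [pre[i] if i in top else 0.0 for i in range(len(pre))]
    # Decoder: reconstruct the input from the sparse latent code.
    x_hat = [sum(z[i] * W_dec[i][j] for i in range(len(z))) + b_dec[j]
             for j in range(len(x))]
    return z, x_hat

# Toy example: 2-dim residual stream, 3 SAE latents, k = 2.
z, x_hat = topk_sae_forward(
    x=[1.0, 0.0],
    W_enc=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    b_enc=[0.0, 0.0, 0.0],
    W_dec=[[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]],
    b_dec=[0.0, 0.0],
    k=2,
)
```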

You can see the SAE training details at https://huggingface.co/apart/llama3.2_1b_base_saes_vader/blob/main/config.json.

FVU and dead_pct metrics for each SAE run are saved under the respective layer directories; e.g., see https://huggingface.co/apart/llama3.2_1b_base_saes_vader/blob/main/model.layers.12/metrics.json
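For readers unfamiliar with these metrics, the sketch below shows how they are conventionally defined: FVU (fraction of variance unexplained) is the reconstruction error normalized by the variance of the original activations, and dead_pct is the share of SAE latents that never activate on an evaluation batch. The toy numbers are illustrative only; the values in metrics.json come from the actual training run.

```python
def fvu(x, x_hat):
    """Fraction of variance unexplained: sum of squared reconstruction
    errors divided by the total variance of the original activations."""
    mean = sum(x) / len(x)
    ss_res = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    ss_tot = sum((a - mean) ** 2 for a in x)
    return ss_res / ss_tot

def dead_pct(max_activations):
    """Percent of latents whose maximum activation over the evaluation
    batch is zero, i.e. latents that never fired."""
    dead = sum(1 for a in max_activations if a == 0.0)
    return 100.0 * dead / len(max_activations)

# Toy data: a 4-dim activation vector and its SAE reconstruction,
# plus per-latent max activations for a hypothetical 4-latent SAE.
print(round(fvu([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]), 4))
print(dead_pct([0.0, 0.5, 0.0, 1.2]))
```

A lower FVU means the SAE reconstructs the residual stream more faithfully; a high dead_pct usually signals wasted capacity in the dictionary.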