Sparse Autoencoders for ESM-2 (8M)

Interpret protein language model representations using sparse autoencoders trained on ESM-2 (8M) layers. These models decompose complex neural representations into interpretable features, enabling deeper understanding of how protein language models process sequence information.

Model Details

  • Base Model: ESM-2 8M (6 layers)
  • Architecture: Sparse Autoencoder
  • Input Dimension: 320
  • Feature Dimension: 10,240
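
For intuition, the shapes implied by these dimensions correspond to a single-layer ReLU autoencoder roughly like the following (a minimal sketch of a standard SAE, not the exact InterPLM training code):

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Sketch: 320-d ESM-2 activations -> 10,240 sparse features -> 320-d reconstruction
    def __init__(self, d_model: int = 320, d_dict: int = 10_240):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.encoder(x))  # sparse, non-negative feature activations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encode(x))  # reconstruction of the input activations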

Available Models

We provide SAE models trained on each of the six layers of ESM-2 8M:

Model name           ESM-2 model       ESM-2 layer
InterPLM-esm2-8m-l1  esm2_t6_8M_UR50D  1
InterPLM-esm2-8m-l2  esm2_t6_8M_UR50D  2
InterPLM-esm2-8m-l3  esm2_t6_8M_UR50D  3
InterPLM-esm2-8m-l4  esm2_t6_8M_UR50D  4
InterPLM-esm2-8m-l5  esm2_t6_8M_UR50D  5
InterPLM-esm2-8m-l6  esm2_t6_8M_UR50D  6

All models share the same architecture and dictionary size (10,240). SAEs trained on ESM-2 650M are also available; the 650M SAEs capture more known biological concepts than the 8M SAEs, but require additional compute for both ESM embedding and SAE feature extraction.
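
For cross-layer comparisons, all six per-layer SAEs can be loaded at once using the load_sae_from_hf helper shown below under Usage, e.g.:

from interplm.sae.inference import load_sae_from_hf

# Load the SAE for every ESM-2 8M layer (1-6) to compare features across layers
saes = {layer: load_sae_from_hf(plm_model="esm2-8m", plm_layer=layer)
        for layer in range(1, 7)}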

Usage

Extract interpretable features from protein sequences:

from interplm.sae.inference import load_sae_from_hf
from interplm.esm.embed import embed_single_sequence

# Get ESM embeddings for protein sequence
embeddings = embed_single_sequence(
    sequence="MRWQEMGYIFYPRKLR",
    model_name="esm2_t6_8M_UR50D",
    layer=4,  # choose any ESM layer (1-6)
)

# Load the SAE for the matching layer and extract features
sae = load_sae_from_hf(plm_model="esm2-8m", plm_layer=4)
features = sae.encode(embeddings)  # sparse feature activations per residue
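
The returned features are sparse: each residue position activates only a small subset of the 10,240 dictionary features. A quick way to inspect the strongest features per residue (assuming encode returns a 2-D torch tensor of shape sequence_length × 10,240):

import torch

# Top 5 features per residue position; zero activations are filtered out
top_vals, top_idx = torch.topk(features, k=5, dim=-1)
for pos, (vals, idx) in enumerate(zip(top_vals, top_idx)):
    active = [(int(i), round(float(v), 3)) for v, i in zip(vals, idx) if v > 0]
    print(f"residue {pos}: {active}")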

For detailed training and analysis examples, see the GitHub README.

Model Variants

The trained SAEs have arbitrary scales between features, since encoder/decoder weights can be linearly rescaled without changing reconstructions. To make features comparable, we normalize each feature to activate between 0 and 1 based on its maximum activation value over Swiss-Prot, our primary analysis dataset. By default, use the pre-normalized SAEs (ae_normalized.pt). Because Swiss-Prot maxima may not perfectly scale features that rarely occur in Swiss-Prot proteins, for custom normalization start from ae_unnormalized.pt and compute your own per-feature maximums (see the GitHub README for the normalization code).
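
As a rough sketch of what such custom normalization could look like (hypothetical helper code, assuming encode returns a torch tensor; not the repository's exact implementation):

import torch

# Scale each feature by its maximum activation over your own dataset,
# mirroring the Swiss-Prot-based scaling used for ae_normalized.pt
features = sae.encode(embeddings)                          # unnormalized activations
feature_max = features.max(dim=0).values.clamp(min=1e-8)   # per-feature max; avoid div by 0
normalized = (features / feature_max).clamp(0.0, 1.0)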
