---
language:
- en
tags:
- protein-language-models
- sparse-autoencoder
license: mit
---

# Sparse Autoencoders for ESM-2 (8M)

Interpret protein language model representations using sparse autoencoders (SAEs) trained on ESM-2 (8M) layers. These models decompose complex neural representations into interpretable features, enabling a deeper understanding of how protein language models process sequence information.

* 📊 Model details in the [InterPLM pre-print](https://www.biorxiv.org/content/10.1101/2024.11.14.623630v1)
* 👩‍💻 Training and analysis code in the [GitHub repo](https://github.com/ElanaPearl/InterPLM)
* 🧬 Explore features at [interPLM.ai](https://interplm.ai)

## Model Details

- Base Model: ESM-2 8M (6 layers)
- Architecture: Sparse Autoencoder
- Input Dimension: 320
- Feature Dimension: 10,240

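The card lists dimensions but not the exact encoder/decoder equations. As a rough mental model only (this sketch assumes a standard ReLU sparse autoencoder; the actual InterPLM architecture and training details are in the pre-print and repo), the encode step maps a 320-dimensional ESM-2 residue embedding to 10,240 mostly-zero feature activations:

```python
import torch
import torch.nn as nn

class SparseAutoencoderSketch(nn.Module):
    """Illustrative only: a generic ReLU SAE with the dimensions listed above,
    not the InterPLM implementation."""

    def __init__(self, d_input: int = 320, d_features: int = 10_240):
        super().__init__()
        self.encoder = nn.Linear(d_input, d_features)  # 320 -> 10,240
        self.decoder = nn.Linear(d_features, d_input)  # 10,240 -> 320

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Sparse, non-negative feature activations
        return torch.relu(self.encoder(x))

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        # Reconstruct the original ESM embedding from the features
        return self.decoder(f)
```

Each residue embedding thus becomes a 10,240-dimensional feature vector in which only a small number of entries are active; interpretation focuses on which features fire and where.
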
## Available Models

We provide SAE models trained on different layers of ESM-2-8M:

| Model name | ESM2 model | ESM2 layer |
|---|---|---|
| [InterPLM-esm2-8m-l1](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_1) | esm2_t6_8M_UR50D | 1 |
| [InterPLM-esm2-8m-l2](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_2) | esm2_t6_8M_UR50D | 2 |
| [InterPLM-esm2-8m-l3](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_3) | esm2_t6_8M_UR50D | 3 |
| [InterPLM-esm2-8m-l4](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_4) | esm2_t6_8M_UR50D | 4 |
| [InterPLM-esm2-8m-l5](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_5) | esm2_t6_8M_UR50D | 5 |
| [InterPLM-esm2-8m-l6](https://huggingface.co/Elana/InterPLM-esm2-8m/tree/main/layer_6) | esm2_t6_8M_UR50D | 6 |

All models share the same architecture and dictionary size (10,240). See [here](https://huggingface.co/Elana/InterPLM-esm2-650m) for SAEs trained on ESM-2 650M. The 650M SAEs capture more known biological concepts than the 8M but require additional compute for both ESM embedding and SAE feature extraction.

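If you want to compare features across layers, the per-layer checkpoints in the table can be fetched in a loop with the same `hf_hub_download` call used in the Usage section below (the `layer_{n}/ae_normalized.pt` paths are the ones listed above):

```python
from huggingface_hub import hf_hub_download

# Download the normalized SAE checkpoint for each ESM-2 8M layer (1-6)
checkpoint_paths = {
    layer: hf_hub_download(
        repo_id="Elana/InterPLM-esm2-8m",
        filename=f"layer_{layer}/ae_normalized.pt",
    )
    for layer in range(1, 7)
}
print(checkpoint_paths[4])  # local cache path of the layer-4 checkpoint
```
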
## Usage

Extract interpretable features from protein sequences:

```python
from huggingface_hub import hf_hub_download
from interplm.sae.inference import load_model
from interplm.esm.embed import embed_single_sequence

# Select ESM layer (must be one of 1-6)
layer_num = 4

# Download the SAE checkpoint and load the model
weights_path = hf_hub_download(
    repo_id="Elana/InterPLM-esm2-8m",
    filename=f"layer_{layer_num}/ae_normalized.pt"
)
sae = load_model(weights_path)

# Get per-residue ESM embeddings for a protein sequence
protein_embeddings = embed_single_sequence(
    sequence="MRWQEMGYIFYPRKLR",
    model_name="esm2_t6_8M_UR50D",
    layer=layer_num,
)

# Extract SAE features
features = sae.encode(protein_embeddings)
```
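
The shape and type of `features` aren't documented on this card; assuming `sae.encode` returns a 2-D tensor of per-residue activations with one column per feature (sequence length × 10,240), a quick way to check how sparse the code is and which features fire at each residue:

```python
import torch

# Continuing from the snippet above; `features` is assumed to be a
# (sequence_length, 10_240) tensor of non-negative activations.
active_fraction = (features > 0).float().mean().item()
print(f"Fraction of non-zero feature activations: {active_fraction:.4f}")

# Top 5 most strongly activating features at each residue position
top_vals, top_idx = torch.topk(features, k=5, dim=-1)
for pos, (idx, vals) in enumerate(zip(top_idx.tolist(), top_vals.tolist())):
    print(pos, idx, [round(v, 3) for v in vals])
```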

For detailed training and analysis examples, see the [GitHub README](https://github.com/ElanaPearl/InterPLM/blob/main/README.md).

## Model Variants

Each layer model is available in two variants:

- Normalized (`ae_normalized.pt`): Features are L2-normalized before encoding, making the magnitude of activations consistent across different inputs. This can improve interpretability by focusing on relative feature patterns rather than absolute magnitudes. Recommended for most analyses focused on feature interpretation.

- Unnormalized (`ae_unnormalized.pt`): Raw activation features without normalization. These preserve the original magnitude information from the ESM model, which can be important for tasks where activation strength carries meaningful signal. Use these if you need to analyze absolute activation magnitudes or when combining features with other ESM-based tools.
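
To use the unnormalized variant, only the checkpoint filename changes; a sketch reusing the calls from the Usage section (assuming `load_model` accepts either checkpoint):

```python
from huggingface_hub import hf_hub_download
from interplm.sae.inference import load_model

layer_num = 4
weights_path = hf_hub_download(
    repo_id="Elana/InterPLM-esm2-8m",
    filename=f"layer_{layer_num}/ae_unnormalized.pt",  # unnormalized variant
)
sae_unnormalized = load_model(weights_path)
```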