LLM-jp-3-1.8B SAE

This repository provides a TopK Sparse Autoencoder (SAE) trained on LLM-jp-3-1.8B, developed by the Research and Development Center for Large Language Models at the National Institute of Informatics, Japan.

Usage

Python version: 3.10.12

See the README.md in the GitHub repository for usage instructions.
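The repository itself does not spell out the architecture, but the model name encodes a TopK SAE with k=32 and a 16x expansion factor. As an illustrative sketch only (not the official loading code), the forward pass of a TopK SAE keeps the k largest latent pre-activations and zeroes the rest before decoding. The dimensions below are small placeholders, not the model's real sizes:

```python
import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k=32):
    """TopK SAE sketch: encode, keep only the k largest latents, decode."""
    z = x @ W_enc + b_enc                 # latent pre-activations, shape (d_sae,)
    drop = np.argsort(z)[:-k]             # indices of everything except the top k
    z_sparse = z.copy()
    z_sparse[drop] = 0.0                  # enforce exactly k active latents
    recon = z_sparse @ W_dec + b_dec      # reconstruction of the input activation
    return z_sparse, recon

rng = np.random.default_rng(0)
d_model = 64                              # placeholder; not the real hidden size
d_sae = d_model * 16                      # 16x expansion, as in the model name
k = 32                                    # as in the model name (k32)
x = rng.standard_normal(d_model)
W_enc = rng.standard_normal((d_model, d_sae)) * 0.01
W_dec = rng.standard_normal((d_sae, d_model)) * 0.01
z, recon = topk_sae_forward(x, W_enc, np.zeros(d_sae), W_dec, np.zeros(d_model), k)
print(int((z != 0).sum()))                # exactly k latents are active
```

This only illustrates the TopK mechanism; for the trained weights and the actual interface, follow the linked README.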

Model size: 134M params
Tensor type: BF16
Format: Safetensors