Mol-GPT2: long context, pretrained on ZINC-15
This repository hosts a GPT-2-based model that generates SMILES strings, pretrained on the ZINC-15 dataset. The model follows the architecture and hyperparameter setup of MolGPT (Bagal et al., 2021) and generates valid molecular representations with high accuracy. It uses a longer context length of 256 tokens, whereas the previous short-context model has a maximum context length of 128.
Model Architecture
```python
from transformers import GPT2Config

config = GPT2Config(
    vocab_size=10_000,
    n_positions=256,
    n_ctx=256,
    n_embd=256,
    n_layer=8,
    n_head=8,
    resid_pdrop=0.1,
    embd_pdrop=0.1,
    attn_pdrop=0.1,
)
```
- Pretrained with fp16 precision on 2× H100 GPUs
- Batch size: 1,024
- Max steps: 100,000
- Warmup steps: 10,000
- Evaluation every 10,000 steps (see the training-arguments sketch below)
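These settings map naturally onto a standard `transformers` `Trainer` configuration. The sketch below is illustrative only: the output directory, per-device batch split, and learning rate are assumptions, not values from the actual training run; only the totals listed above (effective batch size 1,024, 100k steps, 10k warmup, evaluation every 10k steps, fp16) come from this card.

```python
from transformers import TrainingArguments

# Illustrative sketch of training arguments matching the setup above.
# output_dir, the per-device split, and learning_rate are assumptions.
training_args = TrainingArguments(
    output_dir="molgpt-long-context",   # assumed name
    per_device_train_batch_size=512,    # 2 GPUs x 512 = effective batch size of 1,024 (assumed split)
    max_steps=100_000,
    warmup_steps=10_000,
    eval_strategy="steps",              # `evaluation_strategy` on older transformers versions
    eval_steps=10_000,
    fp16=True,
    learning_rate=6e-4,                 # assumed; not stated in this card
)
```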
Performance
| Dataset / Metric | This Model (context 256) | Short-Context Model (context 128) | MolGPT Baseline |
|---|---|---|---|
| ZINC-15 validity | 99.76% | 99.68% | N/A |
| MOSES validity | N/A | N/A | 99.4% |
| GuacaMol validity | N/A | N/A | 98.1% |
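Validity is the fraction of generated SMILES strings that parse into a molecule. A minimal check with RDKit is shown below as an illustration; it is not necessarily the exact evaluation script behind the numbers above.

```python
from rdkit import Chem

def validity(smiles_list):
    """Fraction of SMILES strings that RDKit can parse into a molecule."""
    valid = sum(Chem.MolFromSmiles(s) is not None for s in smiles_list)
    return valid / len(smiles_list)

print(validity(["CC(=O)OC1=CC=CC=C1C(=O)O", "not_a_smiles"]))  # 0.5
```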
Usage Example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model (fp16 weights; move the model to a GPU for faster sampling)
tokenizer = AutoTokenizer.from_pretrained("jonghyunlee/MolGPT_long_context_pretrained-by-ZINC15")
model = AutoModelForCausalLM.from_pretrained(
    "jonghyunlee/MolGPT_long_context_pretrained-by-ZINC15",
    torch_dtype=torch.float16,
)

# Generate molecules from a SMILES prompt (aspirin)
input_ids = tokenizer("CC(=O)OC1=CC=CC=C1C(=O)O", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
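For de novo generation without a seed molecule, sampling can start from the tokenizer's beginning-of-sequence token instead. The sketch below assumes the tokenizer defines a BOS token; if it does not, seed generation with a short SMILES fragment as in the example above.

```python
# De novo sampling sketch (assumes tokenizer.bos_token_id is defined)
bos = torch.tensor([[tokenizer.bos_token_id]])
samples = model.generate(bos, max_length=256, do_sample=True, top_k=50, num_return_sequences=10)
for s in samples:
    print(tokenizer.decode(s, skip_special_tokens=True))
```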