Mol-Llama-3.1-8B-Instruct-Full-Weights
[Project Page] [Paper] [GitHub]
This repo contains the weights of Mol-LLaMA, including the LoRA weights and projectors, built with Llama: meta-llama/Llama-3.1-8B-Instruct. Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
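The official loading code lives in the GitHub repo. As a rough orientation only, below is a minimal sketch of attaching the LoRA adapter to the base LLM with PEFT; the adapter layout and dtype here are assumptions, and the molecular encoders, blending module, and Q-Former projectors must still be loaded via the official code.

```python
# Minimal sketch, assuming the LoRA weights in this repo follow the standard
# PEFT adapter layout; the official loading code in the GitHub repo also
# attaches the molecular encoders, blending module, and Q-Former projectors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the Mol-LLaMA LoRA adapter on top of the base instruction-tuned LLM.
model = PeftModel.from_pretrained(
    base, "DongkiKim/Mol-Llama-3.1-8B-Instruct-Full-Weights"
)
```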
Architecture
- Molecular encoders: Pretrained 2D encoder (MoleculeSTM) and 3D encoder (Uni-Mol)
- Blending Module: Combines complementary information from the 2D and 3D encoders via cross-attention (see the illustrative sketch after this list)
- Q-Former: Embeds the blended molecular representations into query tokens, built on SciBERT
- LoRA: Adapters for parameter-efficient fine-tuning of the LLM
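The blending module fuses the two encoder views before the Q-Former compresses them into query tokens. The sketch below is illustrative only, not the official implementation: the hidden size, head count, residual connections, and token-axis concatenation are all assumptions.

```python
# Illustrative sketch (not the official implementation) of cross-attention
# blending between 2D and 3D molecular token embeddings.
import torch
import torch.nn as nn

class BlendingModule(nn.Module):
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # Each view attends to the other: 2D tokens query 3D tokens and vice versa.
        self.attn_2d_to_3d = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_3d_to_2d = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, h2d: torch.Tensor, h3d: torch.Tensor) -> torch.Tensor:
        # h2d: (batch, n_tokens, dim) from the 2D encoder (e.g. MoleculeSTM)
        # h3d: (batch, n_tokens, dim) from the 3D encoder (e.g. Uni-Mol)
        blended_2d, _ = self.attn_2d_to_3d(query=h2d, key=h3d, value=h3d)
        blended_3d, _ = self.attn_3d_to_2d(query=h3d, key=h2d, value=h2d)
        # Residual fusion, then concatenate along the token axis for the Q-Former.
        return torch.cat([h2d + blended_2d, h3d + blended_3d], dim=1)
```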
Training Dataset
Mol-LLaMA is trained on Mol-LLaMA-Instruct to learn the fundamental characteristics of molecules along with reasoning ability and explainability.
Citation
If you find our model useful, please consider citing our work.
@misc{kim2025molllama,
      title={Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model},
      author={Dongki Kim and Wonbin Lee and Sung Ju Hwang},
      year={2025},
      eprint={2502.13449},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Acknowledgements
We appreciate LLaMA, 3D-MoLM, MoleculeSTM, Uni-Mol and SciBERT for their open-source contributions.