Datasets:
The dataset has four columns: SMILES (string, lengths 2–247), ID (int64, values 1–8.57k), Y (class label, 2 classes), and MW (float64, values 53–2.27k). Sample rows:

| SMILES | ID | Y | MW |
|---|---|---|---|
| CN=NO | 1 | 1 | 60.03 |
| OCCI | 2 | 1 | 171.94 |
| C/C=C/Cl | 10 | 1 | 76.01 |
| C=C(Cl)Cl | 13 | 1 | 95.95 |
| FC(F)Cl | 14 | 1 | 85.97 |
| ClC(Cl)Br | 22 | 1 | 161.86 |
| C/C=C\Cl | 25 | 1 | 76.01 |
| OCCBr | 27 | 1 | 123.95 |
| OCCCl | 28 | 1 | 80 |
| CCOO | 29 | 1 | 62.04 |
| C=CCCl | 30 | 1 | 76.01 |
| NCCN | 31 | 1 | 60.07 |
| C=CC=C | 33 | 1 | 54.05 |
| CN(C)N | 34 | 1 | 60.07 |
| CC(C)Br | 35 | 1 | 121.97 |
| CC(C)Cl | 36 | 1 | 78.02 |
| O=CCCl | 37 | 1 | 77.99 |
| BrCCBr | 38 | 1 | 185.87 |
| ClCCCl | 40 | 1 | 97.97 |
| C#CCO | 42 | 1 | 56.03 |
| NCCCl | 43 | 1 | 79.02 |
| C=CC=O | 44 | 1 | 56.03 |
| O=CC=O | 46 | 1 | 58.01 |
| ClCCBr | 47 | 1 | 141.92 |
| C[C@@H](O)CBr | 48 | 1 | 137.97 |
| O=C(O)CI | 49 | 1 | 185.92 |
| FC[C@H]1CO1 | 50 | 1 | 76.03 |
| C[C@@H](O)CCl | 51 | 1 | 94.02 |
| C=C(Cl)C=O | 54 | 1 | 89.99 |
| NNCCO | 55 | 1 | 76.06 |
| N#C[C@@H](Cl)Br | 56 | 1 | 152.9 |
| OC[C@H]1CO1 | 57 | 1 | 74.04 |
| BrC[C@H]1CO1 | 58 | 1 | 135.95 |
| OCC(Cl)Cl | 59 | 1 | 113.96 |
| C[C@@H](O)CN | 60 | 1 | 75.07 |
| C[C@@H](Cl)CCl | 61 | 1 | 111.98 |
| CC[C@H]1CO1 | 63 | 1 | 72.06 |
| C=C(Br)CCl | 66 | 1 | 153.92 |
| C[C@@H](Br)CBr | 67 | 1 | 199.88 |
| CN(C)N=O | 68 | 1 | 74.05 |
| CCC(C)Br | 70 | 1 | 135.99 |
| CC(C)C#N | 71 | 1 | 69.06 |
| CCC1CO1 | 72 | 1 | 72.06 |
| ClCC1CO1 | 73 | 1 | 92 |
| C/C=C/C=O | 74 | 1 | 70.04 |
| CC(O)CCl | 75 | 1 | 94.02 |
| OCC1CO1 | 76 | 1 | 74.04 |
| O=C(Br)CBr | 77 | 1 | 199.85 |
| ClCOCCl | 80 | 1 | 113.96 |
| FCC1CO1 | 81 | 1 | 76.03 |
| O=CCC=O | 82 | 1 | 72.02 |
| BrCC1CO1 | 86 | 1 | 135.95 |
| CC(O)CBr | 90 | 1 | 137.97 |
| ClC[C@@H]1CO1 | 91 | 1 | 92 |
| N#CC(Cl)Br | 92 | 1 | 152.9 |
| CC(Cl)=C=O | 94 | 1 | 89.99 |
| OC/C=C\Cl | 95 | 1 | 92 |
| OC/C=C/Cl | 96 | 1 | 92 |
| C=C(Br)C=O | 103 | 1 | 133.94 |
| OCCCCl | 104 | 1 | 94.02 |
| C=C(C)C=O | 106 | 1 | 70.04 |
| NC(=O)NO | 108 | 1 | 76.03 |
| CCCCBr | 109 | 1 | 135.99 |
| CC(=O)NN | 110 | 1 | 74.05 |
| C/C=C\C=O | 112 | 1 | 70.04 |
| O=[N+]([O-])CCl | 113 | 1 | 94.98 |
| N#CC(Cl)Cl | 114 | 1 | 108.95 |
| CC(=O)NO | 115 | 1 | 75.03 |
| O=CC(=O)O | 116 | 1 | 74 |
| C=C(Br)CBr | 117 | 1 | 197.87 |
| COCC=O | 118 | 1 | 74.04 |
| ClC[C@H]1CO1 | 119 | 1 | 92 |
| O=C(O)CBr | 120 | 1 | 137.93 |
| OCCCBr | 121 | 1 | 137.97 |
| CCCOO | 122 | 1 | 76.05 |
| Cl/C=C\CCl | 123 | 1 | 109.97 |
| C=CC(C)=O | 124 | 1 | 70.04 |
| O=C1CCO1 | 125 | 1 | 72.02 |
| ClCCCBr | 126 | 1 | 155.93 |
| CC(C)(C)Br | 127 | 1 | 135.99 |
| CC(Cl)CO | 128 | 1 | 94.02 |
| N#CC(Br)Br | 129 | 1 | 196.85 |
| CC(=O)C=O | 130 | 1 | 72.02 |
| OC[C@@H]1CO1 | 132 | 1 | 74.04 |
| CNC(=O)ON | 133 | 1 | 90.04 |
| O=C(CO)CO | 135 | 1 | 90.03 |
| OC[C@H](Cl)CCl | 137 | 1 | 127.98 |
| C/C(Cl)=C\C=O | 138 | 1 | 104 |
| OCC(Cl)(Cl)Cl | 140 | 1 | 147.92 |
| BrCC(Br)(Br)Br | 144 | 1 | 341.69 |
| ClC[C@H](Br)CBr | 149 | 1 | 233.84 |
| N#CC(Cl)(Cl)Cl | 151 | 1 | 142.91 |
| BrCC(Br)CBr | 152 | 1 | 277.79 |
| COC[C@H]1CO1 | 155 | 1 | 88.05 |
| ClCC(Br)CCl | 156 | 1 | 189.9 |
| O=C(O)[C@@H](Cl)Br | 157 | 1 | 171.89 |
| [N-]=[N+]=NCCO | 158 | 1 | 87.04 |
| CCN(C)N=O | 160 | 1 | 88.06 |
| ClC[C@H](Cl)CBr | 163 | 1 | 189.9 |
| COCCOC | 165 | 1 | 90.07 |
Mutagenicity Optimization (MutagenLou2023)
In the original paper, the authors collected Ames records from Hansen's benchmark (6512 compounds) and the ISSSTY database (6052 compounds). After data preparation, a total of 8576 structurally diverse compounds were obtained, including 4643 Ames positives and 3933 Ames negatives. This comprehensive data set was then split into a training set of 7720 compounds and a test set of 856 compounds. Overall, negatives and positives in the data set were balanced, with a ratio of 0.847 (Neg./Pos.). In addition, 805 approved drugs from DrugBank that were not part of the training set and 664 Ames strong positive samples from DGM/NIHS were combined into an external validation set.
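The reported counts are internally consistent, which can be confirmed with a quick calculation using only the figures quoted above:

```python
positives, negatives = 4643, 3933  # Ames positives / negatives
train, test = 7720, 856            # training / test split sizes

# Both breakdowns account for the full 8576 compounds.
assert positives + negatives == 8576
assert train + test == 8576

# The Neg./Pos. ratio matches the reported 0.847.
print(round(negatives / positives, 3))  # 0.847
```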
Data splits
Here we have used the Realistic Split method described in Martin et al. (2018) to split the MutagenLou2023 dataset.
Quickstart Usage
Load a dataset in Python
Each subset can be loaded into Python using the Hugging Face datasets library.
First, from the command line, install the datasets library
$ pip install datasets
then, from within Python, load the datasets library
>>> import datasets
and load one of the MutagenLou2023 datasets, e.g.,
>>> train_test = datasets.load_dataset("maomlab/MutagenLou2023", name = "train_test")
Downloading readme: 100%|██████████| 7.15k/7.15k [00:00<00:00, 182kB/s]
Downloading data: 100%|██████████| 306k/306k [00:00<00:00, 4.45MB/s]
Downloading data: 100%|██████████| 115k/115k [00:00<00:00, 668kB/s]
Generating train split: 100%|██████████| 6862/6862 [00:00<00:00, 70528.05 examples/s]
Generating test split: 100%|██████████| 1714/1714 [00:00<00:00, 22632.33 examples/s]
and inspecting the loaded dataset
>>> train_test
DatasetDict({
    train: Dataset({
        features: ['SMILES', 'ID', 'Y', 'MW'],
        num_rows: 6862
    })
    test: Dataset({
        features: ['SMILES', 'ID', 'Y', 'MW'],
        num_rows: 1714
    })
})
Use a dataset to train a model
One way to use the dataset is through the MolFlux package developed by Exscientia.
First, from the command line, install the MolFlux library with catboost and rdkit support
pip install 'molflux[catboost,rdkit]'
then load, featurize, split, fit, and evaluate a CatBoost model
import json

from datasets import load_dataset
from molflux.datasets import featurise_dataset
from molflux.features import load_from_dicts as load_representations_from_dicts
from molflux.splits import load_from_dict as load_split_from_dict
from molflux.modelzoo import load_from_dict as load_model_from_dict
from molflux.metrics import load_suite

split_dataset = load_dataset('maomlab/MutagenLou2023', name='train_test')

split_featurised_dataset = featurise_dataset(
    split_dataset,
    column="SMILES",
    representations=load_representations_from_dicts([
        {"name": "morgan"},
        {"name": "maccs_rdkit"},
    ]))

model = load_model_from_dict({
    "name": "cat_boost_classifier",
    "config": {
        "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'],
        "y_features": ['Y']}})

model.train(split_featurised_dataset["train"])
preds = model.predict(split_featurised_dataset["test"])

classification_suite = load_suite("classification")
scores = classification_suite.compute(
    references=split_featurised_dataset["test"]['Y'],
    predictions=preds["cat_boost_classifier::Y"])
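The resulting scores map metric names to values. For intuition, accuracy — one of the metrics a classification suite typically reports — is simply the fraction of predictions that match the reference labels. A small self-contained illustration with made-up labels, independent of MolFlux:

```python
def accuracy(references, predictions):
    """Fraction of positions where the prediction equals the reference label."""
    assert len(references) == len(predictions)
    matches = sum(r == p for r, p in zip(references, predictions))
    return matches / len(references)

# Hypothetical binary labels, only to illustrate the computation.
refs = [1, 0, 1, 1, 0, 1, 0, 0]
preds = [1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(refs, preds))  # 0.75 (6 of 8 labels match)
```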
Citation
TY  - JOUR
AU  - Lou, Chaofeng
AU  - Yang, Hongbin
AU  - Deng, Hua
AU  - Huang, Mengting
AU  - Li, Weihua
AU  - Liu, Guixia
AU  - Lee, Philip W.
AU  - Tang, Yun
PY  - 2023
DA  - 2023/03/20
TI  - Chemical rules for optimization of chemical mutagenicity via matched molecular pairs analysis and machine learning methods
JO  - Journal of Cheminformatics
SP  - 35
VL  - 15
IS  - 1
AB  - Chemical mutagenicity is a serious issue that needs to be addressed in early drug discovery. Over a long period of time, medicinal chemists have manually summarized a series of empirical rules for the optimization of chemical mutagenicity. However, given the rising amount of data, it is getting more difficult for medicinal chemists to identify more comprehensive chemical rules behind the biochemical data. Herein, we integrated a large Ames mutagenicity data set with 8576 compounds to derive mutagenicity transformation rules for reversing Ames mutagenicity via matched molecular pairs analysis. A well-trained consensus model with a reasonable applicability domain was constructed, which showed favorable performance in the external validation set with an accuracy of 0.815. The model was used to assess the generalizability and validity of these mutagenicity transformation rules. The results demonstrated that these rules were of great value and could provide inspiration for the structural modifications of compounds with potential mutagenic effects. We also found that the local chemical environment of the attachment points of rules was critical for successful transformation. To facilitate the use of these mutagenicity transformation rules, we integrated them into ADMETopt2 (http://lmmd.ecust.edu.cn/admetsar2/admetopt2/), a free web server for optimization of chemical ADMET properties. The above-mentioned approach would be extended to the optimization of other toxicity endpoints.
SN  - 1758-2946
UR  - https://doi.org/10.1186/s13321-023-00707-x
DO  - 10.1186/s13321-023-00707-x
ID  - Lou2023
ER  -