Model Card for DraGNOME-2.5b-v1
This model is a fine-tuned version of the Nucleotide Transformer (2.5B parameters, multi-species) for Antimicrobial Resistance (AMR) prediction, optimized for handling class imbalance and training efficiency.
Model Details
Model Description
This model is a fine-tuned version of InstaDeepAI's Nucleotide Transformer (2.5B parameters, multi-species) designed for binary classification of nucleotide sequences to predict Antimicrobial Resistance (AMR). It leverages LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning and includes optimizations for class imbalance and training efficiency, with checkpointing to handle Google Colab's 24-hour runtime limit. The model was trained on a dataset of positive (AMR) and negative (non-AMR) sequences.
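The specific class-imbalance strategy is not documented on this card. Purely as an illustration of one common approach (not necessarily the one used for this model), a class-weighted cross-entropy loss can be applied through a custom `Trainer` subclass; the weights below are hypothetical:

```python
import torch
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    """Hypothetical Trainer that re-weights the loss to counter class imbalance."""

    def __init__(self, class_weights, **kwargs):
        super().__init__(**kwargs)
        self.class_weights = class_weights  # e.g. torch.tensor([1.0, 3.0]) if AMR is under-represented

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Weighted cross-entropy: minority-class errors contribute more to the loss
        loss_fct = torch.nn.CrossEntropyLoss(weight=self.class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```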
- Developed by: Blaise Alako
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: alakob
- Model type: Sequence Classification
- Language(s) (NLP): Nucleotide sequences
- License: [More Information Needed]
- Finetuned from model [optional]: InstaDeepAI/nucleotide-transformer-2.5b-multi-species
Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
This model can be used directly for predicting whether a given nucleotide sequence is associated with Antimicrobial Resistance (AMR) without additional fine-tuning.
Downstream Use
The model can be further fine-tuned for specific AMR-related tasks or integrated into larger bioinformatics pipelines for genomic analysis.
Out-of-Scope Use
The model is not intended for general-purpose sequence analysis beyond AMR prediction, nor for non-biological sequence data. Misuse could include applying it to unrelated classification tasks where its training data and architecture are not applicable.
Bias, Risks, and Limitations
The model may exhibit bias due to imbalances in the training dataset or underrepresentation of certain AMR mechanisms. It is limited by the quality and diversity of the training sequences and may not generalize well to rare or novel AMR variants.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Validation on diverse datasets and careful interpretation of predictions are recommended.
How to Get Started with the Model
Use the code below to get started with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer of the base model and the fine-tuned classifier.
# If the repository hosts LoRA adapter weights, the `peft` package must also be installed.
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species")
model = AutoModelForSequenceClassification.from_pretrained("alakob/DraGNOME-2.5b-v1")
model.eval()

# Example inference
sequence = "ATGC..."  # Replace with your nucleotide sequence
inputs = tokenizer(sequence, truncation=True, max_length=1000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
prediction = outputs.logits.argmax(-1).item()  # 0 = non-AMR, 1 = AMR
```
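If a class probability is preferred over a hard label, the logits from the example above can be passed through a softmax; a minimal continuation of the snippet (index 1 is the AMR class):

```python
import torch.nn.functional as F

# Convert logits to class probabilities; index 1 corresponds to the AMR class
probs = F.softmax(outputs.logits, dim=-1)
print(f"P(AMR) = {probs[0, 1].item():.3f}")
```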
Training Details
Training Data
Negative sequences (non-AMR):
- DSM_20231.fasta
- ecoli-k12.fasta
- FDA.fasta

Positive sequences (AMR):
- 28227009.fasta
- nucleotide_fasta_protein_homolog_model_variants.fasta
- 40859916.fasta
- nucleotide_fasta_protein_overexpression_model_variants.fasta
- all_resfinder.fasta
- nucleotide_fasta_protein_variant_model_variants.fasta
- efaecium.fasta
- nucleotide_fasta_rRNA_gene_variant_model_variants.fasta
Training Procedure
Preprocessing
Sequences were tokenized using the Nucleotide Transformer tokenizer with a maximum length of 1000 tokens and truncation applied where necessary.
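The preprocessing script itself is not included with this card; a minimal sketch of equivalent tokenization over a Hugging Face `datasets` object (the `sequence`/`label` columns and example records are hypothetical) could look like:

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species")

# Hypothetical records; the real training data came from the FASTA files listed above
ds = Dataset.from_dict({"sequence": ["ATGCGT", "TTGACA"], "label": [1, 0]})

def tokenize(batch):
    # Truncate to the 1000-token maximum used during training
    return tokenizer(batch["sequence"], truncation=True, max_length=1000)

tokenized_ds = ds.map(tokenize, batched=True, remove_columns=["sequence"])
```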
Training Hyperparameters
- Training regime: fp16 mixed precision
- Learning rate: 5e-5
- Batch size: 8 (with gradient accumulation steps = 8)
- Epochs: 10
- Optimizer: AdamW (default in Hugging Face Trainer)
- Scheduler: Linear with 10% warmup
- LoRA parameters (configured as sketched below):
  - r = 32
  - alpha = 64
  - dropout = 0.1
  - target_modules = ["query", "value"]
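A minimal sketch of how the hyperparameters above map onto `peft` and `transformers` objects; the `output_dir` and `num_labels` values are illustrative, and evaluation/checkpointing arguments are omitted here:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

# LoRA configuration matching the parameters listed above
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=32,
    lora_alpha=64,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)

# Base model with a binary classification head, wrapped with LoRA adapters
base_model = AutoModelForSequenceClassification.from_pretrained(
    "InstaDeepAI/nucleotide-transformer-2.5b-multi-species", num_labels=2
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the 2.5B weights is trained

# Training arguments matching the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="dragnome-2.5b-amr",  # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=10,
    fp16=True,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,  # 10% warmup
)
```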
Speeds, Sizes, Times
Training was performed on Google Colab with checkpointing every 500 steps, retaining the last 3 checkpoints.
Exact throughput and wall-clock training times depend on Colab's hardware allocation; training used an NVIDIA A100 GPU.
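The checkpointing behaviour described above corresponds to the following `TrainingArguments` settings (a sketch; the Google Drive path is illustrative, and other arguments are omitted):

```python
from transformers import TrainingArguments

checkpoint_args = TrainingArguments(
    output_dir="/content/drive/MyDrive/dragnome-checkpoints",  # illustrative Drive path
    save_strategy="steps",
    save_steps=500,       # checkpoint every 500 steps
    save_total_limit=3,   # keep only the last 3 checkpoints
)

# After a Colab disconnect, training can resume from the latest checkpoint:
# trainer.train(resume_from_checkpoint=True)
```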
Evaluation
Testing Data, Factors & Metrics
Testing Data
The test set was derived from a 10% split of the DraGNOME-2.5b-v1 dataset, stratified by AMR labels.
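The split itself is not published; an equivalent stratified 90/10 split with the `datasets` library (toy records and hypothetical column names shown) might look like:

```python
from datasets import Dataset

# Hypothetical labelled dataset; `label` is 1 for AMR, 0 for non-AMR
ds = Dataset.from_dict({
    "sequence": ["ATGCGT"] * 10 + ["TTGACA"] * 10,
    "label": [1] * 10 + [0] * 10,
})
ds = ds.class_encode_column("label")  # stratification requires a ClassLabel column

# 10% held-out test set, stratified by the AMR label
splits = ds.train_test_split(test_size=0.1, stratify_by_column="label", seed=42)
train_ds, test_ds = splits["train"], splits["test"]
```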
Factors
Evaluation was performed across AMR and non-AMR classes.
Metrics
- Accuracy: Proportion of correct predictions
- F1 Score: Harmonic mean of precision and recall (primary metric)
- Precision: Positive predictive value
- Recall: Sensitivity
- ROC-AUC: Area under the receiver operating characteristic curve
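A minimal `compute_metrics` sketch for the Hugging Face `Trainer`, using scikit-learn to produce the metrics listed above (the exact evaluation code for this model is not provided here):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Stable softmax to get the probability of the positive (AMR) class for ROC-AUC
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "roc_auc": roc_auc_score(labels, probs[:, 1]),
    }
```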
Results
[More Information Needed]
Summary
[More Information Needed]
Model Examination [optional]
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: Google Colab NVIDIA A100 GPU
- Hours used: [More Information Needed]
- Cloud Provider: Google Colab
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications [optional]
Model Architecture and Objective
The model uses the Nucleotide Transformer architecture (2.5B parameters) with a sequence classification head, fine-tuned with LoRA for AMR prediction.
Compute Infrastructure
Training was performed on Google Colab with persistent storage via Google Drive.
Hardware
- NVIDIA A100 GPU
Software
- Transformers (Hugging Face)
- PyTorch
- PEFT (Parameter-Efficient Fine-Tuning)
- Weights & Biases (wandb) for logging
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary
- AMR: Antimicrobial Resistance
- LoRA: Low-Rank Adaptation
- Nucleotide Transformer: A transformer-based model for nucleotide sequence analysis
More Information [optional]
[More Information Needed]
Model Card Authors
Blaise Alako
Model Card Contact
[More Information Needed]