---
license: mit
language:
- en
base_model:
- facebook/esm2_t33_650M_UR50D
tags:
- protein-classification
- bioinformatics
- anticancer
- esm2
- transformers
- torch
---

# ANTICP3: Anticancer Protein Prediction

This model is a fine-tuned version of [`facebook/esm2_t33_650M_UR50D`](https://huggingface.co/facebook/esm2_t33_650M_UR50D) designed for **binary classification of anticancer proteins (ACPs)** from their primary sequence.

> **Developed by**: [G. P. S. Raghava Lab, IIIT-Delhi](https://webs.iiitd.edu.in/raghava/)
>  
> **Model hosted by**: [Dr. GPS Raghava's Group](https://huggingface.co/raghavagps-group/anticp3)

---

## Model Details

| Feature            | Description                                                  |
|--------------------|--------------------------------------------------------------|
| **Base Model**     | [`facebook/esm2_t33_650M_UR50D`](https://huggingface.co/facebook/esm2_t33_650M_UR50D) |
| **Fine-tuned On**  | Anticancer Protein Dataset |
| **Model Type**     | Binary Classification                            |
| **Labels**         | `0`: Non-Anticancer<br>`1`: Anticancer                      |
| **Framework**      | [Transformers](https://huggingface.co/docs/transformers) + PyTorch |
| **Format**         | `safetensors`                                                |

---

## Usage

Use this model with the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load tokenizer and fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("raghavagps-group/anticp3")
model = AutoModelForSequenceClassification.from_pretrained("raghavagps-group/anticp3")

# Example protein sequence
sequence = "MANCVVGYIGERCQYRDLKWWELRGGGGSGGGGSAPAFSVSPASGLSDGQSVSVSVSGAAAGETYYIAQCAPVGGQDACNPATATSFTTDASGAASFSFVVRKSYTGSTPEGTPVGSVDCATAACNLGAGNSGLDLGHVALTFGGGGGSGGGGSDHYNCVSSGGQCLYSACPIFTKIQGTCYRGKAKCCKLEHHHHHH"

# Tokenize and run inference
inputs = tokenizer(sequence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
    probs = torch.nn.functional.softmax(logits, dim=-1)
    prediction = torch.argmax(probs, dim=1).item()

labels = {0: "Non-Anticancer", 1: "Anticancer"}
print("Prediction:", labels[prediction])
```
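In practice, protein sequences usually arrive in FASTA files rather than as string literals. The sketch below is a minimal FASTA parser in plain Python (no Biopython dependency; the helper name `read_fasta` and the example records are our own, not part of this model's API) whose output can be fed to the tokenizer for batched inference:

```python
def read_fasta(text):
    """Parse FASTA-formatted text into a dict of {record_id: sequence}."""
    records = {}
    header = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            header = line[1:].split()[0]  # keep the ID, drop the description
            records[header] = []
        elif header is not None:
            records[header].append(line)  # sequences may span multiple lines
    return {h: "".join(parts) for h, parts in records.items()}

# Example with two hypothetical records
fasta = """>prot_A example candidate
MANCVVGYIGERCQYRDLKWW
ELRGGGGSGGGGS
>prot_B
MKTAYIAKQR"""

seqs = read_fasta(fasta)
print(seqs["prot_A"])  # multi-line sequence is concatenated
```

The resulting sequences can then be tokenized in one call, e.g. `tokenizer(list(seqs.values()), return_tensors="pt", padding=True, truncation=True)`, before running the model as shown above.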