MaziyarPanahi committed on
Commit 965005a · verified · 1 Parent(s): cf4ca48

feat: Upload fine-tuned medical NER model OpenMed-NER-GenomicDetect-SuperMedical-355M
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ openmed_vs_sota_grouped_bars.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,242 @@
+ ---
+ widget:
+ - text: "The BRCA2 gene is associated with hereditary breast cancer."
+ - text: "Mutations in the CFTR gene cause cystic fibrosis."
+ - text: "The APOE gene variant affects Alzheimer's disease risk."
+ - text: "The HTT gene provides instructions for making a protein called huntingtin."
+ - text: "Sickle cell disease is caused by a mutation in the HBB gene."
+ tags:
+ - token-classification
+ - named-entity-recognition
+ - biomedical-nlp
+ - transformers
+ - gene-recognition
+ - genetics
+ - genomics
+ - molecular-biology
+ - cell-line-name
+ language:
+ - en
+ license: apache-2.0
+ ---
+
+ # 🧬 [OpenMed-NER-GenomicDetect-SuperMedical-355M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M)
+
+ **Specialized model for Gene Entity Recognition - Gene-related entities**
+
+ [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
+ [![Python](https://img.shields.io/badge/Python-3.8%2B-blue)]()
+ [![Transformers](https://img.shields.io/badge/🤗-Transformers-yellow)]()
+ [![OpenMed](https://img.shields.io/badge/🏥-OpenMed-green)](https://huggingface.co/OpenMed)
+
+ ## 📋 Model Overview
+
+ This model is a **state-of-the-art** fine-tuned transformer engineered to deliver **enterprise-grade accuracy** for gene entity recognition. It identifies and extracts gene-related biomedical entities from clinical texts, research papers, and healthcare documents, enabling applications such as **gene mention identification**, **genetic association studies**, **genomics literature mining**, and **biomedical knowledge graph construction**, with **production-ready reliability** for clinical and research use.
+
+ ### 🎯 Key Features
+ - **High Precision**: Optimized for biomedical entity recognition
+ - **Domain-Specific**: Trained on the curated GELLUS dataset
+ - **Production-Ready**: Validated on clinical benchmarks
+ - **Easy Integration**: Compatible with the Hugging Face Transformers ecosystem
+
+ ### 🏷️ Supported Entity Types
+
+ This model can identify and classify the following biomedical entities:
+
+ - `B-Cell-line-name`
+ - `I-Cell-line-name`
+
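+ For illustration, here is how a hypothetical cell-line mention would be tagged under this BIO scheme (an invented example, not model output):
+
+ ```
+ HeLa              cells  were  cultured  overnight
+ B-Cell-line-name  O      O     O         O
+ ```
+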
+ ## 📊 Dataset
+
+ The GELLUS corpus targets gene recognition and genetics entities for genomics and molecular biology applications.
+
+ The GELLUS corpus is a biomedical NER dataset designed for gene recognition and genetics entity extraction in molecular biology literature. It supports the development of automated systems for gene mention identification, genetic association studies, and genomics text mining, and it is particularly valuable for work on hereditary diseases, genetic disorders, and molecular genetics research. The corpus also serves as a benchmark for evaluating NER models used in genetics research, personalized medicine, and genomics informatics. Note that the entity labels this model was trained on (see `config.json`) mark cell line name mentions, matching the supported entity types listed above.
+
+ ## 📊 Performance Metrics
+
+ ### Current Model Performance
+ - **F1 Score**: `0.9970`
+ - **Precision**: `0.9960`
+ - **Recall**: `0.9981`
+ - **Accuracy**: `0.9986`
+
+ ### 🏆 Comparative Performance on GELLUS Dataset
+
+ | Rank | Model | F1 Score | Precision | Recall | Accuracy |
+ |------|-------|----------|-----------|--------|----------|
+ | 🥇 1 | [OpenMed-NER-GenomicDetect-SnowMed-568M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-SnowMed-568M) | **0.9976** | 0.9977 | 0.9975 | 0.9989 |
+ | 🥈 2 | [OpenMed-NER-GenomicDetect-SuperMedical-355M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M) | **0.9970** | 0.9960 | 0.9981 | 0.9986 |
+ | 🥉 3 | [OpenMed-NER-GenomicDetect-BigMed-560M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-BigMed-560M) | **0.9968** | 0.9967 | 0.9969 | 0.9986 |
+ | 4 | [OpenMed-NER-GenomicDetect-MultiMed-568M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-MultiMed-568M) | **0.9967** | 0.9974 | 0.9960 | 0.9985 |
+ | 5 | [OpenMed-NER-GenomicDetect-PubMed-109M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-PubMed-109M) | **0.9964** | 0.9957 | 0.9970 | 0.9992 |
+ | 6 | [OpenMed-NER-GenomicDetect-PubMed-335M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-PubMed-335M) | **0.9963** | 0.9961 | 0.9965 | 0.9991 |
+ | 7 | [OpenMed-NER-GenomicDetect-PubMed-109M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-PubMed-109M) | **0.9951** | 0.9948 | 0.9953 | 0.9991 |
+ | 8 | [OpenMed-NER-GenomicDetect-BioMed-109M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-BioMed-109M) | **0.9941** | 0.9934 | 0.9949 | 0.9988 |
+ | 9 | [OpenMed-NER-GenomicDetect-TinyMed-82M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-TinyMed-82M) | **0.9940** | 0.9997 | 0.9884 | 0.9961 |
+ | 10 | [OpenMed-NER-GenomicDetect-SuperMedical-125M](https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-125M) | **0.9934** | 0.9999 | 0.9870 | 0.9958 |
+
+ *Rankings based on F1-score performance across all models trained on this dataset.*
+
+ ![OpenMed (open-source) vs. latest closed-source SOTA](https://huggingface.co/spaces/OpenMed/README/resolve/main/openmed_vs_sota_performance.png)
+
+ *Figure: OpenMed (Open-Source) vs. Latest SOTA (Closed-Source) performance comparison across biomedical NER datasets.*
+
+ ## 🚀 Quick Start
+
+ ### Installation
+
+ ```bash
+ pip install transformers torch
+ ```
+
+ ### Usage
+
+ ```python
+ from transformers import pipeline
+
+ # Load the model and tokenizer
+ # Model: https://huggingface.co/OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M
+ model_name = "OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M"
+
+ # Create a pipeline
+ medical_ner_pipeline = pipeline(
+     model=model_name,
+     aggregation_strategy="simple"
+ )
+
+ # Example usage
+ text = "The BRCA2 gene is associated with hereditary breast cancer."
+ entities = medical_ner_pipeline(text)
+
+ print(entities)
+
+ # Recover the surface text of the first entity via its character offsets
+ token = entities[0]
+ print(text[token["start"] : token["end"]])
+ ```
+
+ NOTE: The `aggregation_strategy` parameter defines how token predictions are grouped into entities. For a detailed explanation, please refer to the [Hugging Face documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TokenClassificationPipeline.aggregation_strategy).
+
+ Here is a summary of the available strategies (see the sketch after this list for a side-by-side comparison):
+ - **`none`**: Returns raw token predictions without any aggregation.
+ - **`simple`**: Groups adjacent tokens with the same entity type (e.g., `B-LOC` followed by `I-LOC`).
+ - **`first`**: For word-based models, if tokens within a word have different entity tags, the tag of the first token is assigned to the entire word.
+ - **`average`**: For word-based models, this strategy averages the scores of tokens within a word and applies the label with the highest resulting score.
+ - **`max`**: For word-based models, the entity label from the token with the highest score within a word is assigned to the entire word.
+
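+ A minimal sketch contrasting the two most common strategies (it builds two pipelines purely for illustration; exact outputs will vary by version and hardware):
+
+ ```python
+ from transformers import pipeline
+
+ model_name = "OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M"
+ text = "Mutations in the CFTR gene cause cystic fibrosis."
+
+ # "none": one prediction per sub-word token, with raw B-/I- tags
+ raw = pipeline(model=model_name, aggregation_strategy="none")
+ # "simple": adjacent same-type tokens merged into one entity span
+ grouped = pipeline(model=model_name, aggregation_strategy="simple")
+
+ print(raw(text))
+ print(grouped(text))
+ ```
+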
+ ### Batch Processing
+
+ For efficient processing of large datasets, use proper batching with the `batch_size` parameter:
+
+ ```python
+ texts = [
+     "The BRCA2 gene is associated with hereditary breast cancer.",
+     "Mutations in the CFTR gene cause cystic fibrosis.",
+     "The APOE gene variant affects Alzheimer's disease risk.",
+     "The HTT gene provides instructions for making a protein called huntingtin.",
+     "Sickle cell disease is caused by a mutation in the HBB gene.",
+ ]
+
+ # Efficient batch processing with optimized batch size
+ # Adjust batch_size based on your GPU memory (typically 8, 16, 32, or 64)
+ results = medical_ner_pipeline(texts, batch_size=8)
+
+ for i, entities in enumerate(results):
+     print(f"Text {i+1} entities:")
+     for entity in entities:
+         print(f"  - {entity['word']} ({entity['entity_group']}): {entity['score']:.4f}")
+ ```
+
+ ### Large Dataset Processing
+
+ For processing large datasets efficiently:
+
+ ```python
+ from transformers.pipelines.pt_utils import KeyDataset
+ from datasets import Dataset, load_dataset
+ import pandas as pd
+
+ # Load a public medical dataset from Hugging Face (first 100 examples for testing)
+ medical_dataset = load_dataset("BI55/MedText", split="train[:100]")
+ data = pd.DataFrame({"text": medical_dataset["Completion"]})
+ dataset = Dataset.from_pandas(data)
+
+ # Process with optimal batching for your hardware
+ batch_size = 16  # Tune this based on your GPU memory
+ results = []
+
+ for out in medical_ner_pipeline(KeyDataset(dataset, "text"), batch_size=batch_size):
+     results.extend(out)
+
+ print(f"Processed {len(results)} texts with batching")
+ ```
+
+ ### Performance Optimization
+
+ **Batch Size Guidelines:**
+ - **CPU**: Start with batch_size=1-4
+ - **Single GPU**: Try batch_size=8-32 depending on GPU memory
+ - **High-end GPU**: Can handle batch_size=64 or higher
+ - **Monitor GPU utilization** to find the optimal batch size for your hardware (see the sketch after the memory example below)
+
+ **Memory Considerations:**
+ ```python
+ # For limited GPU memory, use smaller batches
+ medical_ner_pipeline = pipeline(
+     model=model_name,
+     aggregation_strategy="simple",
+     device=0  # Specify GPU device
+ )
+
+ # Process with memory-efficient batching
+ # (`texts`, `batch_size`, and `results` as defined in the batch-processing example above)
+ for batch_start in range(0, len(texts), batch_size):
+     batch = texts[batch_start:batch_start + batch_size]
+     batch_results = medical_ner_pipeline(batch, batch_size=len(batch))
+     results.extend(batch_results)
+ ```
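+
+ To act on the "monitor GPU utilization" advice, a minimal sketch (assumes a CUDA device and the `texts` list from above; standard `torch.cuda` APIs only):
+
+ ```python
+ import torch
+
+ if torch.cuda.is_available():
+     torch.cuda.reset_peak_memory_stats()
+     _ = medical_ner_pipeline(texts, batch_size=8)  # trial run
+     peak_gb = torch.cuda.max_memory_allocated() / 1e9
+     print(f"Peak GPU memory at batch_size=8: {peak_gb:.2f} GB")
+ ```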
+
+ ## 📚 Dataset Information
+
+ - **Dataset**: GELLUS
+ - **Description**: Gene Entity Recognition - Gene-related entities
+
+ ### Training Details
+ - **Base Model**: roberta-large
+ - **Training Framework**: Hugging Face Transformers
+ - **Optimization**: AdamW optimizer with learning rate scheduling
+ - **Validation**: Evaluated on a held-out test set (a sketch of a comparable setup follows)
+
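+ The exact hyperparameters are not published in this card; the following is a hypothetical sketch of a comparable `Trainer` setup, with the assumed values marked in comments:
+
+ ```python
+ from transformers import AutoModelForTokenClassification, TrainingArguments
+
+ # Label set taken from this repository's config.json
+ labels = ["B-Cell-line-name", "I-Cell-line-name", "O"]
+ model = AutoModelForTokenClassification.from_pretrained(
+     "roberta-large", num_labels=len(labels)
+ )
+
+ args = TrainingArguments(
+     output_dir="genomic-ner",
+     optim="adamw_torch",             # AdamW, as stated above
+     lr_scheduler_type="linear",      # "learning rate scheduling"
+     learning_rate=2e-5,              # assumed, not stated in the card
+     num_train_epochs=3,              # assumed
+     per_device_train_batch_size=16,  # assumed
+ )
+ # Pass `args` plus tokenized GELLUS splits to transformers.Trainer to fine-tune.
+ ```
+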
+ ## 🔬 Model Architecture
+
+ - **Base Architecture**: roberta-large
+ - **Task**: Token Classification (Named Entity Recognition)
+ - **Labels**: `B-Cell-line-name`, `I-Cell-line-name`, `O` (see `config.json`)
+ - **Input**: Tokenized biomedical text
+ - **Output**: BIO-tagged entity predictions
+
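+ For users who prefer the raw model over the pipeline, a short sketch using the standard Transformers token-classification API (the example sentence is illustrative):
+
+ ```python
+ import torch
+ from transformers import AutoModelForTokenClassification, AutoTokenizer
+
+ model_name = "OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M"
+ tok = AutoTokenizer.from_pretrained(model_name)
+ mdl = AutoModelForTokenClassification.from_pretrained(model_name)
+
+ inputs = tok("The BRCA2 gene is associated with hereditary breast cancer.",
+              return_tensors="pt")
+ with torch.no_grad():
+     logits = mdl(**inputs).logits        # shape: [1, seq_len, num_labels]
+ pred_ids = logits.argmax(-1)[0].tolist()
+ print([mdl.config.id2label[i] for i in pred_ids])  # one BIO tag per token
+ ```
+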
+ ## 💡 Use Cases
+
+ This model is particularly useful for:
+ - **Clinical Text Mining**: Extracting entities from medical records
+ - **Biomedical Research**: Processing scientific literature
+ - **Drug Discovery**: Linking gene and cell-line mentions to candidate targets
+ - **Healthcare Analytics**: Analyzing patient data and outcomes
+ - **Academic Research**: Supporting biomedical NLP research
+
+ ## 📜 License
+
+ Licensed under the Apache License 2.0. See [LICENSE](https://www.apache.org/licenses/LICENSE-2.0) for details.
+
+ ## 🤝 Contributing
+
+ We welcome contributions of all kinds! Whether you have ideas, feature requests, or want to join our mission to advance open-source Healthcare AI, we'd love to hear from you.
+
+ Follow [OpenMed Org](https://huggingface.co/OpenMed) on Hugging Face 🤗 and click "Watch" to stay updated on our latest releases and developments.
+
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "architectures": [
+     "RobertaForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.2,
+   "bos_token_id": 0,
+   "classifier_dropout": 0.2,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.2,
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "B-Cell-line-name",
+     "1": "I-Cell-line-name",
+     "2": "O"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "label2id": {
+     "B-Cell-line-name": 0,
+     "I-Cell-line-name": 1,
+     "O": 2
+   },
+   "layer_norm_eps": 1e-07,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.51.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
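
A quick way to confirm the label map above from Python (standard `AutoConfig` API; a sketch, not part of this commit):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M")
print(cfg.id2label)  # {0: 'B-Cell-line-name', 1: 'I-Cell-line-name', 2: 'O'}
```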
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7adf4caf8f8058721d7aacd81f791176cba3a8cc277e4ee030140deabfc46fc3
+ size 708674574
openmed_vs_sota_grouped_bars.png ADDED

Git LFS Details

  • SHA256: 626b37d9b20c44e26c92a8b5bf774107393ae0ad0b482d8e7cb3dc31d960f611
  • Pointer size: 131 Bytes
  • Size of remote file: 497 kB
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
test_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "eval_accuracy": 0.9986388887079862,
+   "eval_f1": 0.9970394207562349,
+   "eval_loss": 0.2967251241207123,
+   "eval_precision": 0.9959817410312459,
+   "eval_recall": 0.9980993492687327
+ }
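
As a quick consistency check, `eval_f1` above is the harmonic mean of the reported precision and recall:

```python
# F1 = 2PR / (P + R), using the values from test_results.json
p, r = 0.9959817410312459, 0.9980993492687327
print(2 * p * r / (p + r))  # ≈ 0.997039..., matching eval_f1
```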
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50264": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
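
Note that `add_prefix_space` is `true`: RoBERTa's byte-level BPE needs this when tokenizing pre-split words for token classification. A minimal sketch (standard tokenizer API, not part of this commit):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("OpenMed/OpenMed-NER-GenomicDetect-SuperMedical-355M")
enc = tok(["BRCA2", "is", "a", "gene"], is_split_into_words=True)
print(enc.word_ids())  # maps each sub-word token back to its source word index
```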
vocab.json ADDED
The diff for this file is too large to render. See raw diff