## Overview
The domain of medicine encompasses the structured representation and systematic organization of clinical knowledge, including the classification and interrelation of diseases, pharmacological agents, therapeutic interventions, and biomedical data. This domain is pivotal for advancing healthcare research, facilitating interoperability among medical information systems, and enhancing decision-making processes through precise and comprehensive knowledge representation. By employing ontologies, this domain ensures a standardized and semantically rich framework that supports the integration and analysis of complex biomedical information.
## Ontologies
| Ontology ID | Full Name | Classes | Properties | Last Updated |
|---|---|---|---|---|
| BTO | BRENDA Tissue Ontology (BTO) | 6569 | 10 | 2021-10-26 |
| DEB | Devices, Experimental scaffolds and Biomaterials Ontology (DEB) | 601 | 120 | 2021-06-02 |
| DOID | Human Disease Ontology (DOID) | 15343 | 2 | 2024-12-18 |
| ENM | Environmental Noise Measurement Ontology (ENM) | 26142 | 53 | 2025-02-17 |
| MFOEM | Mental Functioning Ontology of Emotions - Emotion Module (MFOEM) | 637 | 22 | N/A |
| NCIt | NCI Thesaurus (NCIt) | N/A | N/A | 2023-10-19 |
| OBI | Ontology for Biomedical Investigations (OBI) | 9703 | 94 | 2025-01-09 |
| PRO | Protein Ontology (PRO) | N/A | N/A | 2024-08-08 |
## Dataset Files
Each ontology directory contains the following files:

- `<ontology_id>.<format>` - The original ontology file
- `term_typings.json` - A dataset of term-to-type mappings
- `taxonomies.json` - A dataset of taxonomic relations
- `non_taxonomic_relations.json` - A dataset of non-taxonomic relations
- `<ontology_id>.rst` - Documentation describing the ontology
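The JSON files above can also be inspected directly with the standard library, without OntoLearner. The sketch below only checks the top-level shape of a file; the record fields shown (`term`, `types`) are illustrative assumptions, not a documented schema:

```python
import json
from pathlib import Path


def load_records(path: str):
    """Load a JSON dataset file and report its top-level structure.

    Intended for list-shaped files such as term_typings.json; since the
    exact schema is not specified here, we only look at the container type
    and its length rather than individual record fields.
    """
    data = json.loads(Path(path).read_text(encoding="utf-8"))
    kind = type(data).__name__
    size = len(data) if isinstance(data, (list, dict)) else None
    return data, kind, size


# Round-trip with a made-up record (field names are hypothetical):
sample = [{"term": "hepatocyte", "types": ["cell"]}]
Path("term_typings.json").write_text(json.dumps(sample), encoding="utf-8")

data, kind, size = load_records("term_typings.json")
print(kind, size)  # list 1
```

This is only a quick sanity check; for actual task setup, use the `extract()` API shown in the Usage section.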
## Usage
These datasets are intended for ontology learning research and applications. Here's how to use them with OntoLearner:
First, install the OntoLearner library via pip:

```bash
pip install ontolearner
```
### How to load an ontology or LLMs4OL paradigm task datasets

```python
from ontolearner import BTO

# Load the ontology.
ontology = BTO()
ontology.load()

# Load (or extract) the LLMs4OL paradigm task datasets.
data = ontology.extract()
```
### How to use the loaded datasets for LLMs4OL paradigm task settings

```python
from ontolearner import BTO, LearnerPipeline, train_test_split

ontology = BTO()
ontology.load()
data = ontology.extract()

# Split into train and test sets
train_data, test_data = train_test_split(data, test_size=0.2)

# Create a learning pipeline (for RAG-based learning)
pipeline = LearnerPipeline(
    task="term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
    retriever_id="sentence-transformers/all-MiniLM-L6-v2",
    llm_id="mistralai/Mistral-7B-Instruct-v0.1",
    hf_token="your_huggingface_token",  # Only needed for gated models
)

# Train and evaluate
results, metrics = pipeline.fit_predict_evaluate(
    train_data=train_data,
    test_data=test_data,
    top_k=3,
    test_limit=10,
)
```
For more detailed documentation, see the OntoLearner documentation.
## Citation
If you find our work helpful, please consider citing it:

```bibtex
@inproceedings{babaei2023llms4ol,
  title={LLMs4OL: Large language models for ontology learning},
  author={Babaei Giglou, Hamed and D'Souza, Jennifer and Auer, S{\"o}ren},
  booktitle={International Semantic Web Conference},
  pages={408--427},
  year={2023},
  organization={Springer}
}
```