Overview
The scholarly_knowledge domain covers ontologies that represent the structures, processes, and governance mechanisms of scholarly research, academic publishing, and the infrastructure that supports them. These ontologies help organize, retrieve, and disseminate academic knowledge, making scholarly communication more efficient and transparent. By providing a formal framework for knowledge representation, they also support interoperability and integration across research disciplines and platforms.
Ontology ID | Full Name | Classes | Properties | Individuals |
---|---|---|---|---|
CSO | Computer Science Ontology (CSO) | 0 | 0 | 0 |
OPMW | Open Provenance Model for Workflows (OPMW) | 59 | 87 | 2 |
OBOE | Extensible Observation Ontology (OBOE) | 478 | 30 | 0 |
SWO | Software Ontology (SWO) | 2746 | 165 | 443 |
SEPIO | Scientific Evidence and Provenance Information Ontology (SEPIO) | 129 | 117 | 21 |
LexInfo | LexInfo (LexInfo) | 334 | 189 | 276 |
EXPO | Ontology of Scientific Experiments (EXPO) | 347 | 78 | 0 |
SPDocument | SMART Protocols Ontology: Document Module (SP-Document) | 400 | 43 | 45 |
SPWorkflow | SMART Protocols Ontology: Workflow Module (SP-Workflow) | 419 | 17 | 5 |
NFDIcore | National Research Data Infrastructure Ontology (NFDIcore) | 302 | 102 | 0 |
TribAIn | Tribology and Artificial Intelligence Ontology (TribAIn) | 241 | 64 | 21 |
DCAT | Data Catalog Vocabulary (DCAT) | 10 | 39 | 0 |
EURIO | EUropean Research Information Ontology (EURIO) | 44 | 111 | 0 |
Metadata4Ing | Metadata for Intelligent Engineering (Metadata4Ing) | 48 | 100 | 47 |
FRAPO | Funding, Research Administration and Projects Ontology (FRAPO) | 97 | 125 | 25 |
FRBRoo | Functional Requirements for Bibliographic Records - object-oriented (FRBRoo) | 83 | 0 | 0 |
DUO | Data Use Ontology (DUO) | 45 | 1 | 0 |
DataCite | DataCite Ontology (DataCite) | 19 | 10 | 70 |
Framester | Framester Ontology (Framester) | 59 | 77 | 0 |
CiTO | Citation Typing Ontology (CiTO) | 10 | 101 | 0 |
VOAF | Vocabulary of a Friend (VOAF) | 3 | 21 | 1 |
AIISO | Academic Institution Internal Structure Ontology (AIISO) | 22 | 0 | 0 |
PreMOn | Predicate Model for Ontologies (PreMOn) | 15 | 16 | 0 |
PPlan | Ontology for Provenance and Plans (P-Plan) | 11 | 14 | 0 |
WiLD | Workflows in Linked Data (WiLD) | 16 | 0 | 4 |
Dataset Files
Each ontology directory contains the following files:
- `<ontology_id>.<format>` - The original ontology file
- `term_typings.json` - Dataset of term-to-type mappings
- `taxonomies.json` - Dataset of taxonomic relations
- `non_taxonomic_relations.json` - Dataset of non-taxonomic relations
- `<ontology_id>.rst` - Documentation describing the ontology
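The JSON files can also be inspected directly, without going through the OntoLearner API, by downloading them from the Hub. The sketch below is a minimal example rather than the library's own loading path: the repository ID, the per-ontology file path, and the assumed record shape of `term_typings.json` are assumptions to adjust against the actual repository layout.

```python
import json
from huggingface_hub import hf_hub_download

# Hypothetical repo ID and file path -- replace with the actual dataset
# repository and the ontology directory you want to inspect.
path = hf_hub_download(
    repo_id="SciKnowOrg/ontolearner-scholarly_knowledge",  # assumption
    filename="CiTO/term_typings.json",                      # assumption
    repo_type="dataset",
)

with open(path, encoding="utf-8") as f:
    term_typings = json.load(f)

# Assumed record shape: a list of {"term": ..., "types": [...]} objects.
print(len(term_typings), "term-typing records")
print(term_typings[0])
```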
Usage
These datasets are intended for ontology learning research and applications. Here's how to use them with OntoLearner:
```python
from ontolearner import LearnerPipeline, AutoLearnerLLM, Wine, train_test_split

# Load ontology (automatically downloads from Hugging Face)
ontology = Wine()
ontology.load()

# Extract the dataset
data = ontology.extract()

# Split into train and test sets
train_data, test_data = train_test_split(data, test_size=0.2)

# Create a learning pipeline (for RAG-based learning)
pipeline = LearnerPipeline(
    task="term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
    retriever_id="sentence-transformers/all-MiniLM-L6-v2",
    llm_id="mistralai/Mistral-7B-Instruct-v0.1",
    hf_token="your_huggingface_token"  # Only needed for gated models
)

# Train and evaluate
results, metrics = pipeline.fit_predict_evaluate(
    train_data=train_data,
    test_data=test_data,
    top_k=3,
    test_limit=10
)
```
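The snippet above uses `Wine` only as a placeholder. Assuming OntoLearner exposes the scholarly_knowledge ontologies under classes named after the IDs in the table above (an assumption; e.g., `CiTO`), the loading step would change as in this minimal sketch, while the rest of the pipeline stays the same:

```python
from ontolearner import CiTO  # assumption: ontology classes are named after their IDs

# Same workflow as above; only the ontology class changes
ontology = CiTO()
ontology.load()            # downloads the ontology and its datasets from Hugging Face
data = ontology.extract()
```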
For more detailed examples, see the OntoLearner documentation.
Citation
If you use these ontologies in your research, please cite:
```bibtex
@software{babaei_giglou_2025,
  author    = {Babaei Giglou, Hamed and D'Souza, Jennifer and Aioanei, Andrei and Mihindukulasooriya, Nandana and Auer, Sören},
  title     = {OntoLearner: A Modular Python Library for Ontology Learning with LLMs},
  month     = may,
  year      = 2025,
  publisher = {Zenodo},
  version   = {v1.0.1},
  doi       = {10.5281/zenodo.15399783},
  url       = {https://doi.org/10.5281/zenodo.15399783},
}
```