cosmetic changes to docs
README.md CHANGED
tags:
- events
pretty_name: Events
---

<div align="center">
<img src="https://raw.githubusercontent.com/sciknoworg/OntoLearner/main/images/logo.png" alt="OntoLearner"
style="display: block; margin: 0 auto; width: 500px; height: auto;">
<h1 style="text-align: center; margin-top: 1em;">Events Domain Ontologies</h1>
<a href="https://github.com/sciknoworg/OntoLearner"><img src="https://img.shields.io/badge/GitHub-OntoLearner-blue?logo=github" /></a>
</div>

## Overview

The events domain encompasses the structured representation and semantic modeling of occurrences in time, including their temporal, spatial, and contextual attributes. This domain is pivotal in knowledge representation as it facilitates the interoperability and integration of event-related data across diverse systems, enabling precise scheduling, planning, and historical analysis. By providing a framework for understanding and linking events, this domain supports advanced applications in areas such as artificial intelligence, information retrieval, and decision support systems.

## Dataset Files

Each ontology directory contains the following files:

1. `<ontology_id>.<format>` - The original ontology file
2. `term_typings.json` - A dataset of term-to-type mappings
3. `taxonomies.json` - A dataset of taxonomic relations
4. `non_taxonomic_relations.json` - A dataset of non-taxonomic relations
5. `<ontology_id>.rst` - Documentation describing the ontology
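Since the task files listed above are plain JSON, they can be inspected with the standard library alone. A minimal sketch, assuming each file holds a JSON array of records; the field names (`term`, `types`) are illustrative assumptions, not the guaranteed schema:

```python
import json
from pathlib import Path

# Write a tiny stand-in file; a real term_typings.json comes from the dataset.
# The record fields here are assumptions for illustration only.
sample = [{"term": "Workshop", "types": ["Event"]}]
Path("term_typings.json").write_text(json.dumps(sample), encoding="utf-8")

# Load and inspect the records with the stdlib json module.
records = json.loads(Path("term_typings.json").read_text(encoding="utf-8"))
print(len(records), records[0]["term"])  # 1 Workshop
```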

## Usage

These datasets are intended for ontology learning research and applications. Here is how to use them with OntoLearner.

First, install the `OntoLearner` library from PyPI:

```bash
pip install ontolearner
```

**How to load an ontology and its LLMs4OL paradigm task datasets:**

```python
from ontolearner import Conference

ontology = Conference()

# Load the ontology.
ontology.load()

# Load (or extract) the LLMs4OL paradigm task datasets.
data = ontology.extract()
```

**How to use the loaded datasets in an LLMs4OL paradigm task setting:**

```python
from ontolearner import Conference, LearnerPipeline, train_test_split

ontology = Conference()
ontology.load()
data = ontology.extract()

# Split into train and test sets
train_data, test_data = train_test_split(data, test_size=0.2)

# Create a learning pipeline (for RAG-based learning)
pipeline = LearnerPipeline(
    task="term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
    retriever_id="sentence-transformers/all-MiniLM-L6-v2",
    llm_id="mistralai/Mistral-7B-Instruct-v0.1",
    hf_token="your_huggingface_token"  # Only needed for gated models
)

# Train and evaluate
results, metrics = pipeline.fit_predict_evaluate(
    ...
)
```
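As a rough mental model for `test_size=0.2` above, the split behaves like the following generic sketch. This is a standard-library illustration only, not OntoLearner's actual `train_test_split` implementation, which may differ in details such as shuffling or stratification:

```python
import random

def split_80_20(items, test_size=0.2, seed=42):
    """Generic illustration of a train/test split; not OntoLearner's code."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)  # shuffle a copy so the input is untouched
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

train, test = split_80_20(range(10))
print(len(train), len(test))  # 8 2
```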

For more detailed documentation, see the [OntoLearner documentation](https://ontolearner.readthedocs.io).

## Citation

If you find our work helpful, please cite:

```bibtex
@inproceedings{babaei2023llms4ol,
  title={LLMs4OL: Large language models for ontology learning},
  author={Babaei Giglou, Hamed and D’Souza, Jennifer and Auer, S{\"o}ren},
  booktitle={International Semantic Web Conference},
  pages={408--427},
  year={2023},
  organization={Springer}
}
```