hamedbabaeigiglou committed on
Commit 6f1456b · verified · 1 Parent(s): 0b11ff9

minor update to readme

Files changed (1)
  1. README.md +27 -18
README.md CHANGED
````diff
@@ -28,13 +28,6 @@ The geography domain encompasses the structured representation and analysis of s
 | GTS | Geologic Timescale model (GTS) | 40 | 12 | 2020-05-31|
 | Juso | Juso Ontology (Juso) | 30 | 24 | 2015-11-10|
 
-## Dataset Files
-Each ontology directory contains the following files:
-1. `<ontology_id>.<format>` - The original ontology file
-2. `term_typings.json` - Dataset of term to type mappings
-3. `taxonomies.json` - Dataset of taxonomic relations
-4. `non_taxonomic_relations.json` - Dataset of non-taxonomic relations
-5. `<ontology_id>.rst` - Documentation describing the ontology
 
 ## Dataset Files
 Each ontology directory contains the following files:
@@ -66,34 +59,50 @@ ontology.load()
 data = ontology.extract()
 ```
 
+
 **How use the loaded dataset for LLM4OL Paradigm task settings?**
 ``` python
+# Import core modules from the OntoLearner library
 from ontolearner import GEO, LearnerPipeline, train_test_split
 
+# Load the GEO ontology, which contains concepts related to wines, their properties, and categories
 ontology = GEO()
-ontology.load()
+ontology.load() # Load entities, types, and structured term annotations from the ontology
 data = ontology.extract()
 
 # Split into train and test sets
-train_data, test_data = train_test_split(data, test_size=0.2)
+train_data, test_data = train_test_split(data, test_size=0.2, random_state=42)
 
-# Create a learning pipeline (for RAG-based learning)
+# Initialize a multi-component learning pipeline (retriever + LLM)
+# This configuration enables a Retrieval-Augmented Generation (RAG) setup
 pipeline = LearnerPipeline(
-    task = "term-typing",  # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
-    retriever_id = "sentence-transformers/all-MiniLM-L6-v2",
-    llm_id = "mistralai/Mistral-7B-Instruct-v0.1",
-    hf_token = "your_huggingface_token"  # Only needed for gated models
+    retriever_id='sentence-transformers/all-MiniLM-L6-v2',  # Dense retriever model for nearest neighbor search
+    llm_id='Qwen/Qwen2.5-0.5B-Instruct',  # Lightweight instruction-tuned LLM for reasoning
+    hf_token='...',  # Hugging Face token for accessing gated models
+    batch_size=32,  # Batch size for training/prediction if supported
+    top_k=5  # Number of top retrievals to include in RAG prompting
 )
 
-# Train and evaluate
-results, metrics = pipeline.fit_predict_evaluate(
+# Run the pipeline: training, prediction, and evaluation in one call
+outputs = pipeline(
     train_data=train_data,
     test_data=test_data,
-    top_k=3,
-    test_limit=10
+    evaluate=True,  # Compute metrics like precision, recall, and F1
+    task='term-typing'  # Specifies the task
+    # Other options: "taxonomy-discovery" or "non-taxonomy-discovery"
 )
+
+# Print final evaluation metrics
+print("Metrics:", outputs['metrics'])
+
+# Print the total time taken for the full pipeline execution
+print("Elapsed time:", outputs['elapsed_time'])
+
+# Print all outputs (including predictions)
+print(outputs)
 ```
 
+
 For more detailed documentation, see the [![Documentation](https://img.shields.io/badge/Documentation-ontolearner.readthedocs.io-blue)](https://ontolearner.readthedocs.io)
````
 
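One substantive change in this commit is that `train_test_split` now receives `random_state=42`, making the 80/20 split reproducible across runs. For readers unfamiliar with seeded splits, here is a minimal standard-library sketch of what such a split does; the helper name is hypothetical and this is an illustration, not OntoLearner's actual implementation:

```python
import random

def split_train_test(items, test_size=0.2, seed=42):
    """Deterministically shuffle, then carve off a test fraction.

    Illustrative stand-in for train_test_split(data, test_size=0.2, random_state=42).
    """
    rng = random.Random(seed)   # fixed seed -> same shuffle every run
    shuffled = items[:]         # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    # Return (train, test): everything after the test slice, then the slice itself
    return shuffled[n_test:], shuffled[:n_test]

train, test = split_train_test(list(range(100)))
print(len(train), len(test))  # 80 20
```

Fixing the seed means repeated runs evaluate on the same held-out items, so any change in the reported metrics reflects the model or pipeline, not a different split.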
108