Commit 1df610d
Parent(s): 3a4edcd
update readme

Files changed:
- README.md (+73, -0)
- images/Prompt_without_selective_prediction_gpt.png (+3, -0)
- images/curation_process.png (+3, -0)
- images/promp_without_selective_prediction.png (+3, -0)
- images/prompt_with_selective_prediction_gpt.png (+3, -0)
- images/prompt_with_selective_prediction_mistral_falcon.png (+3, -0)
- images/result_without_selective_prediction.png (+3, -0)

README.md (added, +73 lines):
# SecuTable: A Dataset for Semantic Table Interpretation in the Security Domain

## Dataset Overview

Security datasets are scattered across the Internet (CVE, CAPEC, CWE, etc.) and provided in CSV, JSON or XML formats. This makes it difficult to get a holistic view of how information is interconnected across data sources. In addition, many datasets focus on specific attack vectors or limited environments, which limits generalisability, and they often lack detailed annotations, making it difficult to train supervised learning models.

To address these limitations, security data can be extracted from diverse data sources, organised in a tabular format and linked to existing knowledge graphs (KGs). This process is called Semantic Table Interpretation (STI). The KG schemas help align different terminologies and clarify the relationships between concepts.

Although humans can manually annotate tabular data, understanding the semantics of tables and annotating large volumes of data remains complex, resource-intensive and time-consuming. This has led to scientific challenges such as the Tabular Data to Knowledge Graph Matching challenge (SemTab): [https://www.cs.ox.ac.uk/isg/challenges/sem-tab/](https://www.cs.ox.ac.uk/isg/challenges/sem-tab/).

This repository provides SecuTable, a dataset that aims to give a holistic view of security data extracted from several security data sources and organised in tables. It is constructed using the pipeline shown in the following figure: 

## Dataset

The current version of the dataset consists of the following releases:

- The first release, available at [https://huggingface.co/datasets/jiofidelus/SecuTable/tree/v1.0](https://huggingface.co/datasets/jiofidelus/SecuTable/tree/v1.0), is the initial version of the dataset and is composed of 1135 tables.
- The second release, available at [https://huggingface.co/datasets/jiofidelus/SecuTable/tree/main](https://huggingface.co/datasets/jiofidelus/SecuTable/tree/main), consists of 1554 tables. This release is being used to evaluate the capability of open-source LLMs to solve semantic table interpretation tasks during the SemTab challenge [https://sem-tab-challenge.github.io/2025/](https://sem-tab-challenge.github.io/2025/), hosted by the 24th International Semantic Web Conference (ISWC) 2025. It is composed of two folders. The first folder contains the ground truth, composed of 76 tables corresponding to 8922 entities; this subset shows users of SecuTable how the dataset annotation should be done.

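For convenience, here is a minimal sketch of how a release could be downloaded and its tables opened locally. It assumes the tables are stored as CSV files (check the actual folder layout and file format of the release you use); `huggingface_hub` and `pandas` are common choices but are not prescribed by this dataset.

```python
# Sketch: download a SecuTable release and open its tables locally.
# Assumes the tables are CSV files; adjust the glob pattern to the actual layout.
from pathlib import Path

import pandas as pd
from huggingface_hub import snapshot_download

# Download a specific release (use revision="main" for the latest one).
local_dir = snapshot_download(
    repo_id="jiofidelus/SecuTable",
    repo_type="dataset",
    revision="v1.0",
)

# Load every CSV table found in the snapshot into a dictionary of DataFrames,
# keyed by the file path relative to the snapshot root.
tables = {
    str(csv_path.relative_to(local_dir)): pd.read_csv(csv_path)
    for csv_path in Path(local_dir).rglob("*.csv")
}

print(f"Loaded {len(tables)} tables")
```
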
## Dataset evaluation

The evaluation was conducted by running several experiments on the ground truth with two open-source LLMs:

- Mistral
- Falcon

and one closed-source LLM:

- GPT-4o mini

considering the three main tasks of semantic table interpretation:

- Cell Entity Annotation (CEA)
- Column Type Annotation (CTA)
- Column Property Annotation (CPA)

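To make these tasks concrete, the following is a purely illustrative sketch: the toy table, entity IRIs and property names below are invented for explanation and are not taken from the SecuTable ground truth.

```python
# Illustrative only: a toy security table and hand-written STI annotations.
# The IRIs below are hypothetical placeholders, not actual SecuTable annotations.
table = [
    ["CVE-2021-44228", "CWE-502", "critical"],
    ["CVE-2014-0160",  "CWE-125", "high"],
]

# CEA: map individual cells (row, column) to entities in the knowledge graph.
cea = {
    (0, 0): "http://example.org/kg/CVE-2021-44228",
    (1, 0): "http://example.org/kg/CVE-2014-0160",
}

# CTA: map whole columns to a semantic type (class) in the knowledge graph.
cta = {
    0: "http://example.org/kg/Vulnerability",
    1: "http://example.org/kg/Weakness",
}

# CPA: map pairs of columns to the property that relates them.
cpa = {
    (0, 1): "http://example.org/kg/hasWeakness",
}
```
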
In the first set of experiments, we only ask the LLMs to answer the question, without selective prediction, as shown in this picture: 

In the second set of experiments, we use selective prediction: the LLMs are allowed to answer "I don't know", as shown in this picture: 

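Since the exact prompts are not yet included in this README, the following is a purely hypothetical sketch of the selective-prediction setup: it wraps a generic `ask_llm` function (an assumed helper, not part of this repository) and treats a literal "I don't know" answer as an abstention rather than a prediction.

```python
from typing import Callable, Optional

ABSTAIN = "I don't know"

def annotate_cell_with_abstention(
    ask_llm: Callable[[str], str],
    cell_value: str,
    candidate_entities: list[str],
) -> Optional[str]:
    """Ask the model to link a cell to one candidate entity, or to abstain.

    `ask_llm` is a hypothetical text-in/text-out wrapper around Mistral,
    Falcon or GPT-4o mini; the prompt wording is illustrative only.
    """
    prompt = (
        f"Which of the following entities does the table cell '{cell_value}' "
        f"refer to? Candidates: {', '.join(candidate_entities)}. "
        f"If you are not sure, answer exactly: {ABSTAIN}."
    )
    answer = ask_llm(prompt).strip()
    # Under selective prediction, an abstention is not counted as a prediction.
    return None if answer == ABSTAIN else answer
```
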
<!-- \begin{lstlisting}[caption={Prompt used when the LLM does not use selective prediction, with Mistral and Falcon}, label={code:PromptNonSelective}, basicstyle=\ttfamily\small, backgroundcolor=\color{gray!10}]
Put the prompt here (Jean)
\end{lstlisting}

\begin{lstlisting}[caption={Prompt used when the LLM does not use selective prediction, with GPT-4o mini}, label={code:SPARQLCPA}, basicstyle=\ttfamily\small, backgroundcolor=\color{gray!10}]
Put the prompt here (Jean)
\end{lstlisting}

\begin{lstlisting}[caption={Prompt used when the LLM uses selective prediction, with Mistral and Falcon}, label={code:SPARQLCPA}, basicstyle=\ttfamily\small, backgroundcolor=\color{gray!10}]
Put the prompt here (Jean)
\end{lstlisting}

\begin{lstlisting}[caption={Prompt used when the LLM uses selective prediction, with GPT-4o mini}, label={code:SPARQLCPA}, basicstyle=\ttfamily\small, backgroundcolor=\color{gray!10}]
Put the prompt here (Jean)
\end{lstlisting} -->

## Evaluation results

The evaluation results are divided into two parts: the first part presents the results without selective prediction, and the second part presents the results with selective prediction.

### Results without Selective Prediction

The results without selective prediction are presented in the following table, which shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph.

| Model | Precision | Recall | F1 Score |
|---------------------|-----------|--------|----------|
| Mistral | 0.109 | 0.109 | 0.109 |
| gpt-4o-mini | 0.219 | 0.219 | 0.219 |
| falcon3-7b-instruct | 0.319 | 0.319 | 0.319 |

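As a reference for how such scores can be computed, here is a minimal sketch of micro-averaged precision, recall and F1 for CEA, assuming the gold standard and the predictions are dictionaries keyed by (table, row, column) targets; this is a generic formulation, not the official SemTab scorer. Under this definition, precision, recall and F1 coincide whenever every target receives a prediction.

```python
def cea_scores(gold: dict, predicted: dict) -> tuple[float, float, float]:
    """Micro-averaged precision, recall and F1 for cell entity annotation.

    `gold` and `predicted` map (table_id, row, col) targets to entity IRIs.
    Abstentions are simply absent from `predicted`.
    """
    correct = sum(1 for target, entity in predicted.items() if gold.get(target) == entity)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1
```
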
<!--  -->
|
63 |
+
|
64 |
+
### Results with Selective Prediction

The results with selective prediction are presented in the following table, which shows the performance of the LLMs on the CEA task with the SEPSES knowledge graph.

| Model | Precision | Recall | F1 Score |
|---------------------|-----------|--------|----------|
| Mistral | 0.0019 | 0.0019 | 0.0019 |
| gpt-4o-mini | 0.0154 | 0.0154 | 0.0154 |
| falcon3-7b-instruct | 0.0087 | 0.0087 | 0.0087 |

## Citation