SecuTable: A Dataset for Semantic Table Interpretation in Security Domain
Dataset Overview
Security datasets are scattered across the Internet (CVE, CAPEC, CWE, etc.) and provided in CSV, JSON or XML formats, which makes it difficult to get a holistic view of how the information in different data sources is interconnected. In addition, many datasets focus on specific attack vectors or limited environments, which limits generalisability, and most lack detailed annotations, making it difficult to train supervised learning models.
To address these limitations, security data can be extracted from diverse data sources, organised in a tabular format and linked to existing knowledge graphs (KGs); this process is called Semantic Table Interpretation. The KG schemas help align different terminologies and make the relationships between concepts explicit.
Although humans can manually annotate tabular data, understanding the semantics of tables and annotating large volumes of data remains complex, resource-intensive and time-consuming. This has motivated scientific challenges such as the Tabular Data to Knowledge Graph Matching challenge (SemTab) https://www.cs.ox.ac.uk/isg/challenges/sem-tab/.
This repository provides the SecuTable dataset. It aims to give a holistic view of security data extracted from several security data sources and organised in tables. It is constructed using the pipeline illustrated in the figure below:
Dataset
The dataset currently consists of the following releases:
- The first release, available here, contains the initial version of the dataset and is composed of 1135 tables.
- The second release, available here, consists of 1554 tables. This release is used to evaluate the ability of open source LLMs to solve semantic table interpretation tasks in the SemTab challenge https://sem-tab-challenge.github.io/2025/ hosted by the 24th International Semantic Web Conference (ISWC) 2025. It is composed of two folders; the first folder contains the ground truth, made up of 76 tables corresponding to 8922 entities. This subset shows people working with the SecuTable dataset how the annotation should be done.
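As a minimal sketch of how a single table could be loaded for experimentation (the file path below is an assumption about the secutable_v2 folder layout and may need to be adjusted to the release you are using), one can combine huggingface_hub with pandas:

```python
# Minimal sketch: download one SecuTable table and load it with pandas.
# The filename below is an assumed path, not a guaranteed part of the layout.
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="jiofidelus/SecuTable",                    # this dataset repository
    filename="secutable_v2/test/tables/table100.csv",  # assumed table path
    repo_type="dataset",
)

table = pd.read_csv(csv_path)
print(table.columns.tolist())  # inspect the column headers of the table
print(table.head())            # preview the first rows
```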
Dataset evaluation
The evaluation was conducted by running several experiments using open source LLMs (Mistral, Falcon) and a closed source LLM (GPT-4o mini) on the ground truth of 76 tables, considering the three main tasks of semantic table interpretation (a small illustrative sketch of these tasks is given after the list):
- Cell Entity Annotation (CEA)
- Column Type Annotation (CTA)
- Column Property Annotation (CPA)
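As an illustration of what these three tasks produce for a security table, consider the sketch below. The cell identifiers and target URIs are hypothetical placeholders, not the official SEPSES or SecuTable ground-truth values.

```python
# Illustrative sketch of the three annotation tasks. All URIs are hypothetical
# placeholders (example.org) and do not represent the official ground truth.

# CEA: map an individual cell to a knowledge-graph entity,
# e.g. the cell "NULL Pointer Dereference" in the "Name" column.
cea = {
    ("table100.csv", 12, "Name"): "http://example.org/kg/cwe/CWE-476",
}

# CTA: map a whole column to a knowledge-graph class/type.
cta = {
    ("table100.csv", "Name"): "http://example.org/kg/ontology/Weakness",
}

# CPA: map a pair of columns to a knowledge-graph property.
cpa = {
    ("table100.csv", "CWE-ID", "Name"): "http://example.org/kg/ontology/hasName",
}
```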
In the first set of experiments, we only consider whether the LLMs can answer the question, without selective prediction.
In the second set of experiments, we use selective prediction: the LLMs are allowed to answer "I don't know" when they are not sure of the answer.
Evaluation results
The results are divided into two parts: the first part presents the results without selective prediction, and the second part presents the results with selective prediction.
Results without Selective Prediction
The results without selective prediction are presented in the following table, which shows the performance of the LLMs on the CEA task against the SEPSES knowledge graph.
| Model | Precision | Recall | F1 Score |
|---|---|---|---|
| Mistral | 0.109 | 0.109 | 0.109 |
| gpt-4o-mini | 0.219 | 0.219 | 0.219 |
| falcon3-7b-instruct | 0.319 | 0.319 | 0.319 |
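The official scoring script is not reproduced here; the sketch below shows how precision, recall and F1 are commonly computed for CEA in SemTab-style evaluations, assuming one gold entity URI per annotated cell. Under these definitions, a model that answers every cell uses the same denominator for precision and recall, so the three scores coincide.

```python
# Minimal sketch of a SemTab-style CEA scorer (an assumption about the scoring
# procedure, not the official SecuTable evaluation script). Both dictionaries
# map a cell identifier, e.g. (table, row, column), to a single entity URI.

def cea_scores(gold: dict, predicted: dict) -> tuple[float, float, float]:
    correct = sum(1 for cell, uri in predicted.items() if gold.get(cell) == uri)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```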
Results with Selective Prediction
The results with selective prediction are presented in the following table, which shows the performance of the LLMs on the CEA task against the SEPSES knowledge graph.
| Model | Precision | Recall | F1 Score |
|---|---|---|---|
| Mistral | 0.0019 | 0.0019 | 0.0019 |
| gpt-4o-mini | 0.0154 | 0.0154 | 0.0154 |
| falcon3-7b-instruct | 0.0087 | 0.0087 | 0.0087 |
The following table shows the coverage of the LLMs in the selective prediction setting, i.e. when the models are allowed to say "I don't know" whenever they do not know the answer.
| Model | Coverage |
|---|---|
| Mistral | 0.252 |
| gpt-4o-mini | 0.456 |
| falcon3-7b-instruct | 0.270 |
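Coverage here can be read as the fraction of questions the model actually attempted to answer. The sketch below makes that concrete under the assumption that an abstaining model replies with the literal string "I don't know"; this is an illustration, not the official SecuTable protocol.

```python
# Minimal sketch: coverage under selective prediction, assuming abstention is
# expressed as the literal reply "I don't know" (an assumption for illustration).

def coverage(answers: list[str]) -> float:
    """Fraction of questions the model attempted to answer."""
    attempted = [a for a in answers if a.strip().lower() != "i don't know"]
    return len(attempted) / len(answers) if answers else 0.0

# Example: 3 attempted answers out of 4 questions -> coverage = 0.75
print(coverage([
    "http://example.org/kg/cwe/CWE-476",
    "I don't know",
    "http://example.org/kg/cwe/CWE-89",
    "http://example.org/kg/cwe/CWE-20",
]))
```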
Citations