---
language: en
tags:
- log-analysis
- hdfs
- anomaly-detection
license: mit
---
# HDFS Logs Train/Val/Test Splits
This dataset contains preprocessed HDFS log sequences split into train, validation, and test sets for anomaly detection tasks.
## Dataset Description

The dataset is derived from the HDFS v1 log dataset, which contains console logs collected from a Hadoop Distributed File System (HDFS) cluster. Each sequence groups the log messages belonging to one HDFS block and is labeled as either normal or anomalous. The logs have been parsed with the Drain algorithm to extract structured fields and identify event types.
## Data Fields

- `block_id`: Unique identifier for each HDFS block, used to group log messages into sequences
- `event_encoded`: The preprocessed log sequence with event IDs and parameters
- `tokenized_block`: The tokenized log sequence, used for training
- `label`: Classification label (`Normal` or `Anomaly`)
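To make the schema concrete, a single record might look like the following. The field values here are purely illustrative, not drawn from the actual data:

```python
# Hypothetical example of one record's schema; values are made up
# for illustration and do not come from the dataset itself.
record = {
    "block_id": "blk_-1608999687919862906",
    "event_encoded": "E5 E22 E11 E9 E26",
    "tokenized_block": ["E5", "E22", "E11", "E9", "E26"],
    "label": "Normal",
}

# The label field takes exactly one of two values.
assert record["label"] in {"Normal", "Anomaly"}
```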
## Data Splits
- Training set: 460,049 sequences (80%)
- Validation set: 57,506 sequences (10%)
- Test set: 57,506 sequences (10%)
The splits are stratified by the `label` field to maintain the class distribution across splits.
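A stratified 80/10/10 split like the one above can be sketched in plain Python: shuffle each label group independently, then cut each group by the target fractions. This is a sketch of the idea, not the exact pipeline used to produce the dataset:

```python
import random
from collections import defaultdict

def stratified_split(records, label_key="label", fracs=(0.8, 0.1, 0.1), seed=0):
    """Split records into train/val/test, preserving label proportions.

    A sketch of the 80/10/10 stratified split described above; the
    actual preprocessing pipeline may differ in details.
    """
    by_label = defaultdict(list)
    for r in records:
        by_label[r[label_key]].append(r)

    rng = random.Random(seed)
    train, val, test = [], [], []
    for group in by_label.values():
        rng.shuffle(group)
        n = len(group)
        n_train = int(n * fracs[0])
        n_val = int(n * fracs[1])
        train += group[:n_train]
        val += group[n_train:n_train + n_val]
        test += group[n_train + n_val:]
    return train, val, test

# Tiny demo: 10 records, 8 Normal / 2 Anomaly.
records = [{"label": "Normal"}] * 8 + [{"label": "Anomaly"}] * 2
train, val, test = stratified_split(records)
```

Because each label group is partitioned separately, no record is dropped or duplicated, and both classes appear in roughly the target proportions in every split.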
## Source Data
Original data source: https://zenodo.org/records/8196385/files/HDFS_v1.zip?download=1
## Preprocessing
The logs have been preprocessed to:
- Extract structured fields (timestamp, component, etc.)
- Identify and standardize event types
- Remove block IDs from parameter lists to prevent data leakage
- Add special tokens for event type separation
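The block-ID removal step above can be sketched with a small regex substitution. The `<BLK>` placeholder token and the exact pattern are assumptions for illustration; the dataset's preprocessing may use a different convention:

```python
import re

# HDFS block IDs look like "blk_-1608999687919862906" (optionally negative).
# Replacing them with a placeholder keeps the grouping key out of the model
# input, preventing the leakage described above. The "<BLK>" token is a
# hypothetical choice for this sketch.
BLOCK_ID_RE = re.compile(r"blk_-?\d+")

def remove_block_ids(message: str) -> str:
    return BLOCK_ID_RE.sub("<BLK>", message)

line = "Received block blk_-1608999687919862906 of size 91178 from /10.250.10.6"
print(remove_block_ids(line))
# → Received block <BLK> of size 91178 from /10.250.10.6
```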
## Intended Uses
This dataset is designed for:
- Training log anomaly detection models
- Evaluating log sequence prediction models
- Benchmarking different approaches to log-based anomaly detection
## Citation
If you use this dataset, please cite the original HDFS paper:
```bibtex
@inproceedings{xu2009detecting,
  title={Detecting large-scale system problems by mining console logs},
  author={Xu, Wei and Huang, Ling and Fox, Armando and Patterson, David and Jordan, Michael I},
  booktitle={Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles},
  pages={117--132},
  year={2009}
}
```