---
dataset_name: "hlo-feature-dataset"
pretty_name: "HLO Feature Dataset for Deep Learning Resource Estimation"
dataset_type: "graph-and-tabular"
license: "apache-2.0"
task_categories:
  - graph-ml
  - tabular-regression
language: "en"
tags:
  - HPC
  - resource-prediction
  - XLA
  - compiler-features
  - deep-learning
  - graph-learning
  - scheduling
size_categories:
  - 1K<n<10K
source_datasets:
  - custom
dataset_summary: >
  The HLO Feature Dataset contains High-Level Optimizer (HLO) graph features and metadata extracted 
  from deep learning training workloads. It is designed for tasks such as runtime prediction, resource 
  estimation, and graph-based machine learning in HPC environments.

  Each entry pairs model configuration metadata with compiler graph data stored in `.npz` format.
  
  Ideal for ML system optimization studies, GNN research, and AI workload scheduling.

structured_data:
  features:
    - name: "batch"
      type: "integer"
    - name: "epochs"
      type: "integer"
    - name: "learn_rate"
      type: "float"
    - name: "gpu_core_count"
      type: "integer"
    - name: "gpu_memory_size"
      type: "integer"
    - name: "fit_time"
      type: "float"
    - name: "npz_path"
      type: "string"
  graph_data:
    node_features: "node_feat"
    edge_index: "edge_index"
    additional_keys:
      - "node_opcode"
      - "node_config_ids"
      - "node_splits"
usage_example: |
  ```python
  from datasets import load_dataset
  import numpy as np

  dataset = load_dataset("your-username/hlo-feature-dataset")
  sample = dataset['train'][0]

  graph_data = np.load(sample['npz_path'])
  node_features = graph_data['node_feat']
  edges = graph_data['edge_index']
  ```
---

# HLO Feature Dataset for Deep Learning Resource Estimation

[![Dataset](https://img.shields.io/badge/HuggingFace-Dataset-blue)](https://huggingface.co/datasets/your-username/hlo-feature-dataset)

## Dataset Summary
The **HLO Feature Dataset** is a collection of compiler-level graph features (HLO graphs) extracted from deep learning training workloads. Together with detailed metadata (model configurations, GPU specifications), it enables machine learning approaches for:

- ⏱️ **Training Time Prediction**
- 📉 **Resource Consumption Estimation**
- 🖥️ **HPC and GPU Scheduling Optimization**
- 🧩 **Graph-based Neural Architecture Analysis**

This dataset is ideal for experimenting with regression models (e.g., XGBoost) and Graph Neural Networks (GNNs) using compiler features.

---

## Supported Tasks
- **⚙️ Runtime & Resource Prediction**: Predict training time (`fit_time`) based on HLO features.
- **📊 ML for Systems Optimization**: Use tabular + graph data for AI workload management.
- **🔗 Graph Representation Learning**: Apply GNNs on HLO graphs (`node_feat`, `edge_index`).
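
As a concrete sketch of that last task, the snippet below converts one `.npz` graph into a PyTorch Geometric `Data` object. It assumes PyTorch Geometric is installed and that `edge_index` is stored as a `2 x E` array (the filename is illustrative, not part of the dataset's tooling):

```python
import numpy as np
import torch
from torch_geometric.data import Data  # assumes PyTorch Geometric is installed

# Load one HLO graph (the filename is illustrative)
graph = np.load('example_graph.npz')

edge_index = torch.as_tensor(graph['edge_index'], dtype=torch.long)
if edge_index.shape[0] != 2:  # transpose if stored as E x 2 instead of 2 x E
    edge_index = edge_index.t().contiguous()

data = Data(
    x=torch.as_tensor(graph['node_feat'], dtype=torch.float),
    edge_index=edge_index,
)
print(data)  # e.g. Data(x=[num_nodes, num_feats], edge_index=[2, num_edges])
```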

---

## Dataset Structure

Each entry includes:
- **Metadata**: From `dataset-new.csv` (model, optimizer, GPU specs, timing metrics, etc.)
- **HLO Graph Features**: `.npz` files containing:
  - `node_opcode`, `node_feat`, `edge_index`, `node_config_ids`, `node_splits`
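
To check which arrays a given `.npz` file actually carries, a quick inspection sketch (the filename is illustrative):

```python
import numpy as np

# Print every array stored in one HLO graph file, with its shape and dtype
with np.load('example_graph.npz') as graph:
    for key in graph.files:
        print(key, graph[key].shape, graph[key].dtype)
```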

---

## Usage Example

This example demonstrates how to load the metadata, select features, and train an XGBoost model to predict training time (`fit_time`), mirroring the baseline notebook linked below.

```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

# Load metadata CSV
df = pd.read_csv('dataset-new.csv')

# Select example numeric features (categorical columns would need encoding first)
X = df[['batch', 'epochs', 'learn_rate', 'gpu_core_count', 'gpu_memory_size']]
y = df['fit_time']

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Initialize XGBoost Regressor
xgb_model = XGBRegressor(n_estimators=100, learning_rate=0.1, max_depth=6, random_state=42)
xgb_model.fit(X_train, y_train)

# Evaluate
preds = xgb_model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, preds))  # avoids the deprecated squared=False argument
print(f"RMSE: {rmse:.4f}")
```
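
As a quick follow-up to the baseline above, XGBoost's built-in importances show which metadata fields drive the `fit_time` prediction. This continues from the snippet above and is only a sketch:

```python
import pandas as pd

# Rank the metadata features used above by importance
importances = pd.Series(xgb_model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
```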


---

### Example Notebooks
#### 🚀 Baseline: XGBoost for Resource Estimation

A sample baseline implementation using **XGBoost** is provided to demonstrate how to predict resource metrics such as `fit_time` using the dataset's metadata.

📥 **Download the notebook** from the repository:

[Baseline_XGBoost_Resource_Estimation.ipynb](https://huggingface.co/datasets/ICICLE-AI/ResourceEstimation_HLOGenCNN/blob/main/Baseline_XGBoost_Resource_Estimation.ipynb)

This notebook covers:
- Loading and preprocessing metadata from `dataset-new.csv`
- Training an XGBoost regressor to predict training time
- Evaluating model performance (e.g., RMSE)

> ⚡ **Note:** Make sure to adjust paths if cloning the dataset locally or integrating with Hugging Face `datasets` API.
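
For instance, if the repository is cloned locally, the relative paths can be re-rooted before use. A minimal sketch, assuming the (hypothetical) clone directory name below and that `npz_path` is stored relative to the repo root:

```python
from pathlib import Path
import pandas as pd

DATA_ROOT = Path('ResourceEstimation_HLOGenCNN')  # hypothetical local clone location
df = pd.read_csv(DATA_ROOT / 'dataset-new.csv')
# Re-root the relative .npz paths so np.load can find them
df['npz_path'] = df['npz_path'].map(lambda p: str(DATA_ROOT / p))
```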

---

### Loading HLO Graph Features
For graph-based ML tasks, load the `.npz` files:

```python
# Continues from the metadata DataFrame loaded above
npz_file = df.iloc[0]['npz_path']
graph_data = np.load(npz_file)

node_features = graph_data['node_feat']
edges = graph_data['edge_index']

print("Node Feature Shape:", node_features.shape)
print("Edge Index Shape:", edges.shape)
```
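
Building on the arrays above, a sparse adjacency matrix is often handy for classical graph statistics. A sketch assuming `edge_index` has the `2 x E` layout (transpose first if yours is `E x 2`):

```python
import numpy as np
import scipy.sparse as sp

num_nodes = node_features.shape[0]
src, dst = edges[0], edges[1]  # assumes a 2 x E layout
adj = sp.coo_matrix((np.ones(len(src)), (src, dst)), shape=(num_nodes, num_nodes))
print("Edges:", adj.nnz, "| Density:", adj.nnz / num_nodes**2)
```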

---

<!-- ## Citation
If you use this dataset, please cite:

```
@misc{hlofeatures2025,
  title={HLO Feature Dataset for AI Resource Estimation},
  author={Your Name},
  year={2025},
  url={https://huggingface.co/datasets/your-username/hlo-feature-dataset}
}
```
-->

---

## License
This dataset is released under the **Apache-2.0** license, as declared in the metadata above.

---

## Contributions
Contributions are welcome! Feel free to suggest improvements or share models trained on this dataset.