---
dataset_info:
  features:
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: category
    dtype: string
  - name: prompt_id
    dtype: string
  splits:
  - name: train
    num_bytes: 222353
    num_examples: 400
  download_size: 139530
  dataset_size: 222353
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "no_robots_test400"
This is a subset of [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots), selecting 400 questions from the test (`test_sft`) split. Each example keeps the conversation up to and including a user turn, together with its `category` and `prompt_id`. The per-category counts are:
| category | messages |
|---|---|
| Brainstorm | 36 |
| Chat | 101 |
| Classify | 16 |
| Closed QA | 15 |
| Coding | 16 |
| Extract | 7 |
| Generation | 129 |
| Open QA | 34 |
| Rewrite | 21 |
| Summarize | 25 |
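
The subset can be loaded straight from the Hub. A minimal sketch, assuming the default `train` split written by the `push_to_hub` call in the script below:

```python
from datasets import load_dataset

# Load the 400-example subset; columns: messages, category, prompt_id
ds = load_dataset('yujiepan/no_robots_test400', split='train')
print(len(ds))                        # 400
print(ds[0]['messages'][-1]['role'])  # 'user' -- each conversation ends at a user turn
```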
## Code
```python
import pandas as pd
import numpy as np
from copy import deepcopy
from datasets import load_dataset, Dataset


def get_norobot_dataset():
    """Expand each conversation in no_robots' test split into one example per user turn."""
    ds = load_dataset('HuggingFaceH4/no_robots')
    all_test_data = []
    for sample in ds['test_sft']:
        for i, message in enumerate(sample['messages']):
            if message['role'] == 'user':
                # Keep the conversation up to and including this user turn.
                item = dict(
                    messages=deepcopy(sample['messages'][:i + 1]),
                    category=sample['category'],
                    prompt_id=sample['prompt_id'],
                )
                all_test_data.append(item)
    return Dataset.from_list(all_test_data)


dataset = get_norobot_dataset().to_pandas()
print(dataset.groupby('category').count())

# Sort by the stringified messages so the sampling below is deterministic across runs.
dataset['_sort_key'] = dataset['messages'].map(str)
dataset = dataset.sort_values(['_sort_key'])

subset = []
for category, group_df in sorted(dataset.groupby('category')):
    # Take ~60.3% of each category; keep a category whole if that would yield <= 20 rows.
    n = int(len(group_df) * 0.603)
    if n <= 20:
        n = len(group_df)
    indices = np.random.default_rng(seed=42).choice(len(group_df), size=n, replace=False)
    subset.append(group_df.iloc[indices])

df = pd.concat(subset)
df = df.drop(columns=['_sort_key'])
df = df.reset_index(drop=True)
print(len(df))
print(df.groupby('category').count().to_string())

Dataset.from_pandas(df).push_to_hub('yujiepan/no_robots_test400')
```
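
One detail worth noting: a fresh `np.random.default_rng(seed=42)` is constructed on every loop iteration, so each category draws from an identical random stream. The snippet below is a standalone illustration of that behavior, not part of the card's pipeline:

```python
import numpy as np

# Re-creating the generator with the same seed replays the same stream,
# so two categories of equal size would pick the same row positions.
a = np.random.default_rng(seed=42).choice(10, size=3, replace=False)
b = np.random.default_rng(seed=42).choice(10, size=3, replace=False)
assert (a == b).all()
print(a)  # identical to b
```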