---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: context
    dtype: string
  - name: response
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 3243541
    num_examples: 4000
  download_size: 2050955
  dataset_size: 3243541
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- table-question-answering
- question-answering
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
# Databricks-Dolly-4k

This dataset contains 4000 samples drawn from the `databricks/databricks-dolly-15k` dataset. The smaller subset is intended for fast experimentation, quick prototyping, and evaluation of models when computational resources are limited.
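For context, a subset of this size can be reproduced from the original dataset with a few lines of `datasets` code. The random sampling and `seed=42` below are illustrative assumptions only; the exact selection used to build this split is not documented here:

```python
from datasets import load_dataset

# Load the full 15k-sample dataset
dolly_15k = load_dataset("databricks/databricks-dolly-15k", split="train")

# Shuffle and keep the first 4000 examples.
# NOTE: random sampling with seed=42 is an assumption for illustration;
# it does not necessarily reproduce the exact samples in this split.
dolly_4k = dolly_15k.shuffle(seed=42).select(range(4000))
print(dolly_4k)
```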
## Dataset Structure

The dataset is provided as a `DatasetDict` with the following splits:

- `train`: Contains 4000 samples.
Each split contains the following features, identical to the original dataset:

- `instruction`: The instruction or prompt for the task.
- `context`: Additional context or information related to the instruction (may be empty).
- `response`: The response to the given instruction.
- `category`: The task category carried over from the original dataset.
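As an illustration of how these fields fit together, the sketch below assembles one sample into a single training prompt. The Alpaca-style template is only an assumption for demonstration; it is not prescribed by this dataset:

```python
def build_prompt(sample: dict) -> str:
    """Format one Databricks-Dolly-4k sample as an instruction-tuning prompt.

    The template below is an assumed, Alpaca-style layout; adapt it to
    whatever format your fine-tuning pipeline expects.
    """
    parts = [f"### Instruction:\n{sample['instruction']}"]
    if sample["context"]:
        parts.append(f"### Context:\n{sample['context']}")
    parts.append(f"### Response:\n{sample['response']}")
    return "\n\n".join(parts)
```

Applied to a loaded sample (see Usage below), `build_prompt(databricks_dolly_4k["train"][0])` returns a single string ready for tokenization.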
## Usage

You can easily load this subset using the `datasets` library:
```python
from datasets import load_dataset

databricks_dolly_4k = load_dataset("Vishva007/Databricks-Dolly-4k")

print(databricks_dolly_4k)
print(databricks_dolly_4k["train"][0])
```
## Example Usage
Here’s an example of how you might use this dataset in a Python script:
```python
from datasets import load_dataset

# Load the dataset
databricks_dolly_4k = load_dataset("Vishva007/Databricks-Dolly-4k")

# Print the first sample in the training set
print(databricks_dolly_4k["train"][0])

# Access specific fields from the first sample
sample = databricks_dolly_4k["train"][0]
print(f"Instruction: {sample['instruction']}")
print(f"Context: {sample['context']}")
print(f"Response: {sample['response']}")
print(f"Category: {sample['category']}")
```
## Dataset Info

### Features
- `instruction`: The instruction or prompt for the task.
- `context`: Additional context or information related to the instruction (may be empty).
- `response`: The response to the given instruction.
- `category`: The task category carried over from the original dataset.
### Splits

- `train`: Contains 4000 samples.
### Metadata

- Download Size: 2,050,955 bytes
- Dataset Size: 3,243,541 bytes
## License
This dataset is derived from the databricks/databricks-dolly-15k dataset, which is licensed under the Apache License 2.0.
For more details about the original dataset, please refer to its dataset card on the Hugging Face Hub.