Vishva007 committed (verified) · Commit 0ff0389 · Parent(s): 995c73e

Update README.md

Files changed (1): README.md (+115 −23)
---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: context
    dtype: string
  - name: response
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 3243541
    num_examples: 4000
  download_size: 2050955
  dataset_size: 3243541
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- table-question-answering
- question-answering
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---

# Databricks-Dolly-4k

This dataset is a **4000-sample** subset of the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset.

This smaller split is intended for fast experimentation, quick prototyping, and model evaluation when computational resources are limited.

## Dataset Structure

The dataset is provided as a `DatasetDict` with the following splits:

* **`train`**: contains 4000 samples.

Each sample contains the following features, identical to the original dataset:

* `instruction`: the instruction or prompt for the task.
* `context`: additional context or reference text for the instruction (may be empty).
* `response`: the response to the given instruction.
* `category`: the task category of the sample (e.g., `open_qa`, `summarization`).
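
These fields are commonly folded into a single training prompt for instruction tuning. A minimal pure-Python sketch; the prompt template and the example record below are illustrative assumptions, not part of the dataset:

```python
def build_prompt(sample: dict) -> str:
    """Combine instruction, optional context, and response into one string."""
    parts = [f"### Instruction:\n{sample['instruction']}"]
    if sample.get("context"):  # context may be empty for open-ended tasks
        parts.append(f"### Context:\n{sample['context']}")
    parts.append(f"### Response:\n{sample['response']}")
    return "\n\n".join(parts)

# Hypothetical record with the same fields as the dataset.
example = {
    "instruction": "Summarize the passage.",
    "context": "Dolly is an instruction-following dataset.",
    "response": "Dolly contains human-written instruction/response pairs.",
    "category": "summarization",
}
print(build_prompt(example))
```

With a loaded dataset, the same function can be applied to each sample, e.g. via `map`.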

## Usage

You can load this dataset with the `datasets` library:

```python
from datasets import load_dataset

databricks_dolly_4k = load_dataset("Vishva007/Databricks-Dolly-4k")

print(databricks_dolly_4k)
print(databricks_dolly_4k["train"][0])
```

## Example Usage

Here’s an example of how you might use this dataset in a Python script:

```python
from datasets import load_dataset

# Load the dataset
databricks_dolly_4k = load_dataset("Vishva007/Databricks-Dolly-4k")

# Print the first sample in the training set
print(databricks_dolly_4k["train"][0])

# Access specific fields from the first sample
sample = databricks_dolly_4k["train"][0]
print(f"Instruction: {sample['instruction']}")
print(f"Context: {sample['context']}")
print(f"Response: {sample['response']}")
print(f"Category: {sample['category']}")
```
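
Because every sample carries a `category` field, a common first step is to inspect the category distribution or keep only one task type. A minimal sketch over illustrative records; with a loaded dataset you would iterate over `databricks_dolly_4k["train"]` instead:

```python
from collections import Counter

# Illustrative records with the same fields as the dataset;
# the category names are assumptions based on the Dolly task types.
samples = [
    {"instruction": "q1", "context": "", "response": "a1", "category": "open_qa"},
    {"instruction": "q2", "context": "doc", "response": "a2", "category": "closed_qa"},
    {"instruction": "q3", "context": "", "response": "a3", "category": "open_qa"},
]

# Count samples per category.
counts = Counter(s["category"] for s in samples)
print(counts)  # Counter({'open_qa': 2, 'closed_qa': 1})

# Keep only one task type (with a Hub dataset you could use
# databricks_dolly_4k["train"].filter(lambda s: s["category"] == "open_qa")).
open_qa = [s for s in samples if s["category"] == "open_qa"]
print(len(open_qa))  # 2
```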

## Dataset Info

### Features

- `instruction`: the instruction or prompt for the task.
- `context`: additional context or reference text for the instruction.
- `response`: the response to the given instruction.
- `category`: the task category of the sample.

### Splits

- **`train`**: contains 4000 samples.

### Metadata

- **Download Size**: 2050955 bytes
- **Dataset Size**: 3243541 bytes

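The byte counts in the `dataset_info` front matter (`download_size: 2050955`, `dataset_size: 3243541`) convert to human-readable units as sketched below:

```python
def to_mib(n_bytes: int) -> float:
    """Convert a byte count to mebibytes (MiB)."""
    return n_bytes / (1024 ** 2)

# Values taken from the dataset_info front matter above.
print(f"download_size: {to_mib(2050955):.2f} MiB")  # ~1.96 MiB
print(f"dataset_size:  {to_mib(3243541):.2f} MiB")  # ~3.09 MiB
```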
## License

This dataset is derived from the [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset, which is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

For more details about the original dataset, please refer to the [official documentation](https://huggingface.co/datasets/databricks/databricks-dolly-15k).

---