Commit a0321f2
1 parent: e597249

adds the dataset usage section to README.md

Files changed:
- README.md (+60 −0)
- pubchemqc-pm6.py (+0 −0)
README.md
CHANGED
@@ -69,6 +69,9 @@ configs:
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
 - [Data Splits and Configurations](#data-splits-and-configurations)
+- [How to Use the Dataset](#how-to-use-the-dataset)
+  - [Prerequisites](#prerequisites)
+  - [Accessing the Data](#accessing-the-data)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
@@ -219,6 +222,63 @@ has seven configurations/subsets:
 - `pm6opt_chnopsfcl500nosalt`
 - `pm6opt_chnopsfclnakmgca500`
 
+## How to Use the Dataset
+
+### Prerequisites
+
+We recommend isolating your work in a virtualenv or conda environment.
+You can create a new conda environment, `pubchemqc`, with
+
+```bash
+conda create -n pubchemqc python=3.12
+```
+
+and activate it using the following command:
+
+```bash
+conda activate pubchemqc
+```
+
+Once the conda environment is activated, you can install the
+dependencies in it as shown below:
+
+```bash
+pip install datasets huggingface_hub ijson
+```
+
+### Accessing the Data
+
+Once the required packages are installed, you can run the following code
+to access the data:
+
+```python
+# import the modules
+from datasets import load_dataset
+
+# load the dataset with streaming
+hub_ds = load_dataset(path="molssiai-hub/pubchemqc-pm6",
+                      name="pm6opt",
+                      split="train",
+                      streaming=True,
+                      cache_dir="./tmp",
+                      trust_remote_code=True)
+
+# fetch a batch of 32 samples from the dataset
+ds = list(hub_ds.take(32))
+```
+
+The `name` argument defaults to `pm6opt`, which refers to the entire
+dataset. Other configurations (subsets), listed in Sec.
+[Data Splits and Configurations](#data-splits-and-configurations),
+can also be selected.
+
+The `split` must be set to `train`, as it is the only split in our dataset.
+We recommend using `streaming=True` to avoid downloading the entire dataset
+to disk. The `cache_dir` argument stores the Hugging Face datasets' and
+models' artifacts in a non-default directory (by default,
+`~/.cache/huggingface`). Because we use a custom
+[load script](https://huggingface.co/datasets/molssiai-hub/pubchemqc-pm6/blob/main/pubchemqc-pm6.py),
+the `trust_remote_code` argument must also be set to `True`.
+
 ## Dataset Creation
 
 ### Curation Rationale
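Editor's note on the usage pattern introduced by this commit: `hub_ds.take(32)` materializes a single batch from the stream. For walking the whole stream in repeated fixed-size batches, a small generic helper such as the sketch below can be used. `batched` is an illustrative name, not part of the `datasets` API (Python 3.12's `itertools.batched` provides similar behavior, yielding tuples instead of lists); the helper works on any iterable, including a streaming Hugging Face dataset.

```python
from itertools import islice


def batched(iterable, batch_size):
    """Yield successive lists of up to batch_size items from any iterable,
    e.g. a streaming Hugging Face dataset."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, batch_size))
        if not chunk:
            return
        yield chunk


# demo with a plain range standing in for the streamed dataset
batches = list(batched(range(10), 4))  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

With the dataset from the snippet above, `for batch in batched(hub_ds, 32): ...` would process the stream batch by batch without loading it all into memory.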
pubchemqc-pm6.py
CHANGED

The diff for this file is too large to render. See raw diff.