Upload README.md with huggingface_hub
README.md CHANGED
@@ -38,13 +38,16 @@ This dataset contains two distinct subsets specifically designed for RAG applications

This structure makes it ideal for building and evaluating RAG systems that retrieve relevant biomedical information from a corpus and generate accurate, evidence-based answers to complex biomedical questions.

+The code to generate this dataset is here: https://github.com/MattMorgis/bioasq-rag
+
## Dataset Structure

The dataset contains three main components:

1. **Corpus** (`data/corpus.jsonl`): A collection of PubMed abstracts including metadata.

+   - The corpus is accessible through the "train" split of the "text-corpus" config
+   - Each document contains:
   - `id`: PubMed ID
   - `title`: Title of the paper
   - `text`: Abstract text
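For readers who prefer to work with the raw files rather than the `datasets` loader shown later in the diff, here is a minimal sketch of reading `data/corpus.jsonl` directly. It assumes only the JSONL layout and the three fields listed above, and that the repository's data files have been downloaded locally.

```python
import json

# Each line of data/corpus.jsonl is one JSON document with the fields
# listed above: `id` (PubMed ID), `title`, and `text` (abstract).
with open("data/corpus.jsonl", encoding="utf-8") as f:
    corpus = [json.loads(line) for line in f]

first_doc = corpus[0]
print(first_doc["id"])          # PubMed ID
print(first_doc["title"])       # paper title
print(first_doc["text"][:200])  # first 200 characters of the abstract
```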
@@ -58,7 +61,8 @@ The dataset contains three main components:

2. **Dev Questions** (`data/dev.jsonl`): Development set of biomedical questions.

+   - The dev questions are accessible through the "dev" split of the "question-answer-passages" config
+   - Each question contains:
   - `question_id`: Unique identifier for the question
   - `question`: The question text
   - `answer`: Ideal answer
@@ -67,7 +71,7 @@ The dataset contains three main components:
   - `snippets`: Relevant snippets from abstracts

3. **Eval Questions** (`data/eval.jsonl`): Eval set of biomedical questions.
-   - Same structure as dev questions
+   - Same structure as dev questions, accessible through the "eval" split

## Usage
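To see the question schema in practice, the sketch below peeks at the first record of `data/dev.jsonl`. It assumes only the fields listed above and that `snippets` is a list; both are worth verifying against the data itself, since the diff does not show every field.

```python
import json

# Inspect the first dev question; expected fields include
# `question_id`, `question`, `answer`, and `snippets`.
with open("data/dev.jsonl", encoding="utf-8") as f:
    first_question = json.loads(f.readline())

print(sorted(first_question.keys()))          # all available fields
print(first_question["question"])
print(first_question["answer"])
print(len(first_question["snippets"]), "snippets")  # assumes `snippets` is a list
```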
@@ -75,22 +79,25 @@ This dataset is designed for training and evaluating RAG systems for biomedical

### Loading the Dataset

-You can load the dataset using the Hugging Face `datasets` library
+You can load the dataset using the Hugging Face `datasets` library. **Note that you must specify a config name**:

```python
from datasets import load_dataset

-# Load the
+# Load the corpus of PubMed abstracts
+corpus_dataset = load_dataset("mattmorgis/bioasq-12b-rag", "text-corpus")
+
+# Load the question-answer dataset
+questions_dataset = load_dataset("mattmorgis/bioasq-12b-rag", "question-answer-passages")

-# Access the corpus
+# Access the corpus data (note: the corpus is stored in the "train" split)
+corpus_docs = corpus_dataset["train"]

# Access the development questions
-dev_questions =
+dev_questions = questions_dataset["dev"]

# Access the eval questions
-eval_questions =
+eval_questions = questions_dataset["eval"]
```
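As a follow-up to the loading snippet above, a quick sanity check of the loaded splits might look like the sketch below. It reuses the `corpus_docs`, `dev_questions`, and `eval_questions` variables from that snippet and only the field names documented earlier.

```python
# Run after the loading snippet above.
print(f"{len(corpus_docs)} corpus documents")
print(f"{len(dev_questions)} dev questions, {len(eval_questions)} eval questions")

# Inspect one development question and its reference answer.
sample = dev_questions[0]
print(sample["question"])
print(sample["answer"])

# Build a simple PubMed-ID -> abstract lookup over the corpus,
# e.g. as the starting point for a retrieval index.
abstracts_by_id = {doc["id"]: doc["text"] for doc in corpus_docs}
```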
### Example RAG Application