mkurman committed (verified) · Commit e9c74ee · Parent: 1b4b949
Update README.md
Files changed: README.md (+69 −0)
@@ -59,4 +59,73 @@ configs:
  path: data/train-*
  - split: test
  path: data/test-*
+ dataset_name: mkurman/medmcqa-hard
+ license: cc
+ language:
+ - en
+ task_categories:
+ - multiple-choice
+ - question-answering
+ - reinforcement-learning
+ tags:
+ - medical
+ - MCQ
+ - evaluation
+ - SFT
+ - DPO
+ - RL
+ pretty_name: MedMCQA-Hard
+ size_categories:
+ - 10k<n<1M
  ---
+
+ # medmcqa-hard
+
+ **A harder, de-duplicated remix of MedMCQA**, designed to reduce memorization and strengthen generalization on medical multiple-choice questions.
+
+ ## Why “hard”?
+
+ * **Answer-list variants:** Each correct option appears in **multiple phrasing/list variants** (e.g., reordered enumerations, equivalent wording), so models can’t rely on surface-form recall and must reason over the content.
+ * **RL-friendly targets:** Every item includes **one canonical correct answer** plus both a **single incorrect answer** and the full **set of incorrect answers**, making it plug-and-play for **DPO**, **RLAIF/GRPO**, and contrastive objectives.
+ * **Chat formatting:** Adds a lightweight **`messages`** field (and an optional `system_prompt`) not present in the original dataset, making it convenient for instruction-tuned models and SFT.
+
+ ## Intended uses
+
+ * Robust **eval** of medical QA beyond memorization.
+ * **SFT** with chat-style prompts.
+ * **DPO / other RL** setups using `single_incorrect_answer` or `incorrect_answers`.
+
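As a minimal sketch of the DPO-style use above, a record can be turned into a (prompt, chosen, rejected) triple using only the field names from this card's schema. The helper name `to_dpo_pair` and the sample record are illustrative, not part of the dataset tooling:

```python
# Hypothetical sketch: build a DPO preference pair from one record.
# Field names (question, options, cop, single_incorrect_answer) follow
# this card's schema; the record values below are toy data.
def to_dpo_pair(record: dict) -> dict:
    prompt = record["question"] + "\n" + "\n".join(record["options"])
    return {
        "prompt": prompt,
        "chosen": record["options"][record["cop"]],     # canonical correct answer
        "rejected": record["single_incorrect_answer"],  # one hard negative
    }

record = {
    "question": "Which vitamin deficiency causes scurvy?",
    "options": ["A) Vitamin A", "B) Vitamin B12", "C) Vitamin C", "D) Vitamin D"],
    "letter": "C",
    "cop": 2,
    "single_incorrect_answer": "B) Vitamin B12",
}
pair = to_dpo_pair(record)
```

Using `incorrect_answers` instead of `single_incorrect_answer` would yield one rejected completion per distractor, which suits listwise or GRPO-style objectives.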
+ ## Data schema (fields)
+
+ * `question`: str
+ * `options`: list[str] (usually 4)
+ * `letter`: str (A/B/C/D)
+ * `cop`: int (0-based index of correct option)
+ * `incorrect_answers`: list[str]
+ * `single_incorrect_answer`: str
+ * `messages`: list[{role: "system"|"user"|"assistant", content: str}]
+ * `system_prompt`: str (optional)
+
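The schema is redundant by design (`letter`, `cop`, and the answer lists all encode the same ground truth), so a consistency check is easy to write. This helper is a hypothetical sketch, not shipped with the dataset:

```python
# Illustrative sanity check over one record (toy values, not dataset rows):
# `letter` should be the A-D label of the option at index `cop`, and
# `incorrect_answers` should be exactly the options minus the correct one.
def is_consistent(record: dict) -> bool:
    letter_ok = record["letter"] == chr(ord("A") + record["cop"])
    correct = record["options"][record["cop"]]
    distractors_ok = sorted(record["incorrect_answers"]) == sorted(
        o for o in record["options"] if o != correct
    )
    return letter_ok and distractors_ok

record = {
    "question": "Which vitamin deficiency causes scurvy?",
    "options": ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
    "letter": "C",
    "cop": 2,
    "incorrect_answers": ["Vitamin A", "Vitamin B12", "Vitamin D"],
}
```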
+ ### Example
+
+ ```json
+ {
+   "question": "Which of the following is true about …?",
+   "options": ["A …", "B …", "C …", "D …"],
+   "letter": "C",
+   "cop": 2,
+   "incorrect_answers": ["A …", "B …", "D …"],
+   "single_incorrect_answer": "B …",
+   "messages": [
+     {"role":"system","content":"You are a medical tutor."},
+     {"role":"user","content":"Q: Which of the following…?\nA) …\nB) …\nC) …\nD) …"}
+   ]
+ }
+ ```
+
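To make the "robust eval" use concrete, a minimal scoring step could extract the predicted letter from a model's free-text reply and compare it to `letter`. A sketch under assumptions: the regex and helper names here are illustrative, not part of any dataset API:

```python
import re
from typing import Optional

# Hypothetical scoring helpers for MCQ eval against this card's schema.
def extract_choice(reply: str) -> Optional[str]:
    """Return the first standalone A-D letter in a model reply, if any."""
    m = re.search(r"\b([A-D])\b", reply)
    return m.group(1) if m else None

def is_correct(reply: str, record: dict) -> bool:
    """Compare the extracted letter against the record's gold `letter`."""
    return extract_choice(reply) == record["letter"]
```

In practice a stricter parser (e.g. anchored on "Answer:") reduces false matches when a reply quotes several option letters.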
+ ## Source & attribution
+
+ Derived from **MedMCQA** (Pal, Umapathi, Sankarasubbu; CHIL 2022). Please cite the original dataset and paper when using this work.
+
+ > **Safety note:** For research and education only. Not for clinical use.
+