Modalities: Text
Formats: json
Languages: English
Size: < 1K rows
ArXiv: 2308.11462
Libraries: Datasets, pandas
License: CC BY 4.0
Dataset preview (a sample of the default split; every pair carries a score of 1):

| query-id | corpus-id | score |
| --- | --- | --- |
| q64 | c64_1 | 1 |
| q406 | c406_4 | 1 |
| q290 | c290_0 | 1 |
| q127 | c127_2 | 1 |
| q488 | c488_2 | 1 |

SCALR (MLEB version)

This is the version of the SCALR evaluation dataset used in the Massive Legal Embedding Benchmark (MLEB) by Isaacus.

This dataset tests the ability of information retrieval models to retrieve legal holdings relevant to complex, reasoning-intensive legal questions.

Structure 🗂️

As per the MTEB information retrieval dataset format, this dataset comprises three splits: default, corpus, and queries.

The default split pairs questions (query-id) with correct holdings (corpus-id), each pair having a score of 1.

The corpus split contains holdings, with the text of a holding being stored in the text key and its id being stored in the _id key. There is also a title column, which is deliberately set to an empty string in all cases for compatibility with the mteb library.

The queries split contains questions, with the text of a question being stored in the text key and its id being stored in the _id key.
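
For concreteness, here is a minimal sketch of loading the three splits with the datasets library and resolving a relevance pair back to raw text. The repository id below is a placeholder (substitute the actual Hugging Face path of this dataset), and the name of the underlying split inside each config may vary, so it is looked up generically:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual Hugging Face path.
REPO_ID = "isaacus/scalr-mleb"

# In the MTEB retrieval format, "default", "corpus", and "queries" are separate configs.
pairs = load_dataset(REPO_ID, "default")
corpus = load_dataset(REPO_ID, "corpus")
queries = load_dataset(REPO_ID, "queries")

# Each config is a DatasetDict; take its (single) underlying split generically.
pairs_ds = pairs[next(iter(pairs))]
corpus_ds = corpus[next(iter(corpus))]
queries_ds = queries[next(iter(queries))]

# Index texts by id and resolve the first relevance pair back to raw text.
query_text = {row["_id"]: row["text"] for row in queries_ds}
corpus_text = {row["_id"]: row["text"] for row in corpus_ds}

first = pairs_ds[0]
print(query_text[first["query-id"]])    # the question
print(corpus_text[first["corpus-id"]])  # its correct holding (score 1)
```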

Methodology 🧪

To understand how SCALR itself was created, refer to its documentation.

This dataset was constructed by randomly shuffling the test split of SCALR and splitting it in half, with one half used here and the remainder reserved for validation. Questions were treated as anchors, their correct holdings as positive passages, and all incorrect holdings were added to the global passage corpus, as in the sketch below.
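
As an illustration only, the construction roughly corresponds to the following sketch. The SCALR field names used here (question_id, question, holdings, correct_holding_id) are hypothetical stand-ins, not the source dataset's actual schema:

```python
import random

def build_retrieval_splits(scalr_test, seed=0):
    """Convert half of SCALR's test split into the MTEB retrieval format.

    `scalr_test` is assumed to be a list of dicts with hypothetical keys:
    question_id, question, holdings (list of {id, text}), correct_holding_id.
    """
    examples = list(scalr_test)
    random.Random(seed).shuffle(examples)  # random shuffling first
    half = examples[: len(examples) // 2]  # remainder reserved for validation

    queries, corpus, pairs = [], [], []
    for ex in half:
        # Questions become anchors (queries).
        queries.append({"_id": ex["question_id"], "text": ex["question"]})
        # Every candidate holding, correct or not, joins the global passage corpus.
        for holding in ex["holdings"]:
            corpus.append({"_id": holding["id"], "title": "", "text": holding["text"]})
        # Only the correct holding is paired with its question, at score 1.
        pairs.append({
            "query-id": ex["question_id"],
            "corpus-id": ex["correct_holding_id"],
            "score": 1.0,
        })
    return queries, corpus, pairs
```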

License 📜

This dataset is licensed under CC BY 4.0.

Citation 🔖

If you use this dataset, please cite both LegalBench (for SCALR) and MLEB:

@misc{guha2023legalbench,
      title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models}, 
      author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher RΓ© and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
      year={2023},
      eprint={2308.11462},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{mleb-2025,
      title={Massive Legal Embedding Benchmark (MLEB)},
      author={Umar Butler and Abdur-Rahman Butler},
      year={2025},
      url={https://isaacus.com/blog/introducing-mleb},
      publisher={Isaacus}
}