
This repository is publicly accessible, but you have to accept the conditions to access its files and content.

You agree to abide by all terms of the Review-5K Dataset License, including proper attribution, restrictions on redistribution and commercial use, and a prohibition on use in real-world peer review systems. You also commit to using the dataset ethically and responsibly, refraining from any unlawful or harmful applications, and you acknowledge the limitations and potential biases of the data.


Review-5K: A Dataset for Peer Review Analysis

HomePage: https://wengsyx.github.io/Researcher/

Review-5K Dataset

The Review-5K dataset is a collection of peer reviews and associated metadata from the ICLR 2024 conference. It is designed to facilitate research on the analysis of the peer review process itself, not to be used for automating or replacing human review in real-world settings.

The dataset is constructed by:

  1. Data Collection: Gathering paper information (title, abstract, PDF) and corresponding review comments from ICLR 2024 via OpenReview.
  2. Data Retrieval: Retrieving the LaTeX source files from arXiv where licensing permits; if unavailable, converting the PDF to markdown with MagicDoc.
  3. Data Structuring: Organizing each data point to reflect the peer review process, including:
    • Summary of the work
    • Identified strengths and weaknesses
    • Questions for clarification
    • Numerical scores (soundness, presentation, contribution, overall rating)
    • Meta-review information (if available)
  4. Data Filtering: Filtering out incomplete or blank data points.
  5. Data Split: Splitting the dataset into training and testing sets (a minimal sketch of the filtering and split steps follows this list).
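
As a minimal sketch of what steps 4 and 5 might look like: the helper names, the completeness criteria, and the 90/10 ratio below are illustrative assumptions, not the released pipeline; the field names follow the example record shown later in this card.

import random

def is_complete(example):
    # Hypothetical completeness check: keep records that have a title,
    # at least one review, and at least one numerical score.
    return bool(example.get("title")) and bool(example.get("review_contents")) and bool(example.get("rates"))

def filter_and_split(examples, test_ratio=0.1, seed=0):
    # Drop blank or incomplete records, then split; the ratio is illustrative,
    # not necessarily the ratio used for the released train/test split.
    kept = [ex for ex in examples if is_complete(ex)]
    random.Random(seed).shuffle(kept)
    n_test = int(len(kept) * test_ratio)
    return kept[n_test:], kept[:n_test]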

The dataset contains 4,991 papers and over 16,000 reviewer comments.

Example

{
  "id": "LEYUkvdUhq",                // Unique identifier (the OpenReview ID)
  "title": "ZipIt! Merging Models from Different Tasks without Training",
  "decision": "## Paper Decision\n\n\nAccept (poster)",
  "review_contents": [               // List of reviews
    {
      "content": "### Reviewer\\n\\n### Summary\\n\\nThe paper..."
    },
    {
      "content": "### Reviewer\\n\\n### Summary\\n\\nThe paper..."
    }
  ],
  "messages": [                      // List of chat-format messages
    {
      "content": "You are an expert academic reviewer tasked with providing a thorough and balanced evaluation of research papers...",
      "role": "system"
    },
    {
      "content": "Title: ZipIt! Merging Models from Different Tasks without Training\n\nABSTRACT\n...",
      "role": "user"
    },
    {
      "content": "## Reviewer\n\n### Summary\n\nThis paper proposes a novel method for merging two different models...",
      "role": "assistant"
    }
  ],
  "rates": [6, 5, 6, 6]              // Example numerical ratings (details may vary)
}
  • id: A unique identifier for the paper and its associated reviews (the OpenReview ID).
  • title: The title of the reviewed paper.
  • decision: Paper decision.
  • review_contents: A list of individual reviews. Each review is a dictionary with (at minimum) a content field containing the text of the review. It may also include structured fields like "Soundness," "Presentation," etc., if extracted.
  • messages: The training data in chat format: a system prompt, the paper content as the user message, and the review as the assistant response.
  • rates: A list of numerical scores provided by the reviewers. The exact meaning and order of these scores should be documented (e.g., [Soundness, Presentation, Contribution, Overall]).
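
For illustration, the fields above can be combined into a per-paper summary. The helper below is a sketch; the name summarize and the assumption that each entry of rates is a single reviewer's overall score are illustrative, not guaranteed by the dataset.

from statistics import mean

def summarize(example):
    # `example` is one Review-5K record; assumes `rates` holds one score per reviewer.
    return {
        "id": example["id"],
        "accepted": "Accept" in example["decision"],
        "num_reviews": len(example["review_contents"]),
        "mean_rate": mean(example["rates"]) if example["rates"] else None,
    }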

Using Review-5K

You can easily download and use the Review-5K dataset with Hugging Face's datasets library.

from datasets import load_dataset

dataset = load_dataset("WestlakeNLP/Review-5K")
print(dataset)

Alternatively, stream the dataset:

from datasets import load_dataset

dataset = load_dataset("WestlakeNLP/Review-5K", streaming=True)
print(dataset)
print(next(iter(dataset['train'])))
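
The splits can also be converted to pandas for quick analysis. This is a sketch: the split name "train" follows the examples above, and the rates column is assumed to hold a list of scores per paper.

from datasets import load_dataset

dataset = load_dataset("WestlakeNLP/Review-5K")
df = dataset["train"].to_pandas()           # requires pandas to be installed

print(df.columns)
print(df["rates"].apply(len).describe())    # number of scores recorded per paper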

Model Specifications

Model Name | Pre-training Language Model | HF Link | MS Link
CycleReviewer-ML-Llama3.1-8B | Llama3.1-8B-Instruct | 🤗 link | 🤖 TODO
CycleReviewer-ML-Llama3.1-70B | Llama3.1-70B-Instruct | 🤗 link | 🤖 TODO
CycleReviewer-ML-123B | Mistral-Large-2 | 🤗 link | 🤖 TODO

The CycleReviewer models are trained on Review-5K.
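
As a rough sketch of loading one of these models with transformers: the repository id WestlakeNLP/CycleReviewer-ML-Llama3.1-8B and the prompt format are assumptions based on the messages field above, so check the linked model cards for the intended usage.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "WestlakeNLP/CycleReviewer-ML-Llama3.1-8B"   # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Mirror the `messages` format from the dataset: system prompt + paper content.
messages = [
    {"role": "system", "content": "You are an expert academic reviewer tasked with providing a thorough and balanced evaluation of research papers..."},
    {"role": "user", "content": "Title: ...\n\nABSTRACT\n..."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))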

CITE

@inproceedings{weng2025cycleresearcher,
  title={CycleResearcher: Improving Automated Research via Automated Review},
  author={Yixuan Weng and Minjun Zhu and Guangsheng Bao and Hongbo Zhang and Jindong Wang and Yue Zhang and Linyi Yang},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=bjcsVLoHYs}
}

Open Source License

The code in this repository is open-sourced under the Apache-2.0 license. The dataset is released under the Review-5K Dataset License.
