MathMentorDB: A Large-Scale Corpus of Mathematics Tutoring Dialogues

Dataset Description

MathMentorDB is a large-scale corpus of authentic mathematics tutoring dialogues derived from an online Mathematics Discord server. The dataset comprises 5.5 million messages across 200,332 conversations involving 43,249 unique users, capturing real-time student-tutor interactions at a scale far larger than existing tutoring dialogue corpora.

This dataset addresses the gap between traditional small-scale educational research and the needs of modern NLP applications by providing authentic, naturally occurring tutoring conversations from a diverse, global participant pool.

Key Features

  • 5.5 million messages across 200,000+ conversations
  • 43,249 unique users (students and volunteer tutors)
  • Fully pseudonymized to protect user privacy
  • Rich metadata: conversation structure, user roles, timestamps, channel information
  • Multi-party dialogues showing collaborative problem-solving
  • High-quality conversation disentanglement (Cohen's kappa = 0.98 inter-annotator agreement)
  • Diverse mathematical topics: from basic arithmetic through university-level mathematics
  • Authentic help-seeking behavior: captures real student questions and tutoring strategies

Dataset Statistics

Metric                            | Value
----------------------------------|-----------
Total messages                    | 5,452,947
Conversations                     | 200,332
Unique users                      | 43,249
Help channels                     | 32
Average messages per conversation | ~27
File size (compressed)            | 235 MB

Supported Tasks

  • Question classification: Categorize student questions by cognitive level (procedural, conceptual, exploratory)
  • Dialogue analysis: Study authentic educational conversations and discourse patterns
  • Educational discourse analysis: Analyze teaching and learning interactions
  • Tutoring strategy detection: Identify effective tutoring moves and pedagogical techniques
  • Help-seeking behavior analysis: Understand how students formulate questions
  • Response quality assessment: Evaluate tutor responses for effectiveness
  • Conversation flow modeling: Study multi-turn educational dialogues
  • Educational chatbot development: Train and evaluate tutoring systems
  • Sentiment analysis: Analyze emotional aspects of learning interactions
  • Topic modeling: Identify common mathematical topics and difficulty patterns

Dataset Structure

Data Instances

Each row represents a single message within a conversation. Messages are ordered chronologically and grouped by conversation ID.

Example instance:

{
  'conversation_id': 1,
  'help_channel': 'help-10',
  '__rowid__': '64b250c38b084d7cb8ca65842aee515a',
  'author_id': '377426871932944386',
  'author_name': 'Christopher Gonzalez',
  'text': 'Can someone help me understand limits?',
  'timestamp': '2023-01-15T14:23:01',
  'isStudent': True,
  'isHelper': False,
  'isBot': False
}

Data Fields

Field           | Type    | Description
----------------|---------|---------------------------------------------------------------
conversation_id | int64   | Unique identifier for each conversation thread (1 to 200,332)
help_channel    | string  | Discord channel where conversation occurred (e.g., "help-10")
__rowid__       | string  | Unique message identifier (hash)
author_id       | string  | Pseudonymized user ID (consistent across messages)
author_name     | string  | Pseudonymized author name (fake name generated for anonymity)
text            | string  | Message content (may include LaTeX, code, URLs)
timestamp       | string  | ISO 8601 timestamp of when message was sent
isStudent       | boolean | True if author is seeking help in this conversation
isHelper        | boolean | True if author is providing help in this conversation
isBot           | boolean | True if message is from an automated bot

Note on roles: Users can be both students and helpers in different conversations. Role labels are conversation-specific.
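
Because role flags are attached to messages within a conversation, you can recover each user's role per conversation with a simple group-by. A minimal sketch in pandas (assuming the dataset is loaded as shown under Usage Examples below):

from datasets import load_dataset

df = load_dataset("mikeion/mathconverse_pseudonyms")["train"].to_pandas()

# Role flags per (conversation, user); a user's flags apply only within that conversation
roles = df.groupby(["conversation_id", "author_id"])[["isStudent", "isHelper"]].any().reset_index()

# Users who act as a student in at least one conversation and as a helper in another
per_user = roles.groupby("author_id")[["isStudent", "isHelper"]].any()
both = per_user[per_user["isStudent"] & per_user["isHelper"]]
print(f"Users who act as both student and helper: {len(both)}")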

Data Splits

Split | Messages  | Conversations
------|-----------|--------------
train | 5,452,947 | 200,332

Currently released as a single split. Researchers should create their own train/validation/test splits based on conversation IDs to avoid data leakage (do not split within conversations).

Recommended split strategy:

# Split by conversation_id to avoid leakage
# (shuffle the IDs first for a random split; see the fuller example under Usage Examples)
unique_conv_ids = dataset['train'].unique('conversation_id')
train_ids = unique_conv_ids[:160000]       # ~80%
val_ids = unique_conv_ids[160000:180000]   # ~10%
test_ids = unique_conv_ids[180000:]        # ~10%

Dataset Size

  • Downloaded files: 235 MB (parquet format)
  • Uncompressed: ~235 MB
  • Number of rows: 5,452,947
  • Number of conversations: 200,332
  • Number of unique users: 43,249

Dataset Creation

Source Data

Initial Data Collection

Data was collected from the Mathematics Discord Server, a public online community where students from around the world seek help with mathematics problems and volunteer tutors provide assistance. The server includes dedicated help channels where students post questions and receive responses from peer tutors.

Mathematics topics covered:

  • Arithmetic and pre-algebra
  • Algebra (I, II, linear, abstract)
  • Geometry and trigonometry
  • Calculus (I, II, III, multivariable)
  • Differential equations
  • Linear algebra
  • Probability and statistics
  • Discrete mathematics
  • Number theory
  • Real analysis
  • Abstract algebra
  • And more advanced topics

Data Collection Process

  1. Channel selection: Only public help channels included (32 channels total)
  2. Message extraction: All messages from selected channels collected
  3. Format: Raw Discord message data (JSON); a parsing sketch follows this list
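
The raw export schema is not documented in this card; purely as an illustration of the extraction step, here is a sketch that flattens hypothetical raw message objects into rows of the released format (the file name and the fields `id`, `author`, `content`, `timestamp` are assumptions, not the actual export schema):

import json

# Hypothetical raw export for one help channel; the real Discord export may differ
with open("help-10.json") as f:
    raw_messages = json.load(f)

rows = [
    {
        "help_channel": "help-10",
        "__rowid__": msg["id"],
        "author_id": msg["author"]["id"],
        "text": msg["content"],
        "timestamp": msg["timestamp"],
    }
    for msg in raw_messages
]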

Curation Process

Conversation Disentanglement

The primary methodological contribution is high-quality conversation disentanglement:

  1. Thread detection: Used Discord's reply structure and temporal analysis
  2. Manual annotation: Sample of 500+ conversations annotated by domain experts
  3. Inter-annotator agreement: Cohen's kappa = 0.98 (near-perfect agreement)
  4. Validation: Multiple annotators verified conversation boundaries

This disentanglement quality (κ = 0.98) is substantially higher than is typical for dialogue datasets and was achieved without machine learning, using careful rule-based methods and human validation.
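
The exact rules are not reproduced here. As a rough illustration of the kind of temporal heuristic such rule-based disentanglement can build on, the sketch below starts a new thread whenever the gap between consecutive messages in a channel exceeds a threshold; the 30-minute threshold is an assumption for illustration, and reply-structure handling is omitted:

from datetime import datetime, timedelta

GAP = timedelta(minutes=30)  # assumed threshold, for illustration only

def split_into_threads(messages):
    """messages: chronologically ordered dicts with an ISO 8601 'timestamp' field."""
    threads, current, last_time = [], [], None
    for msg in messages:
        t = datetime.fromisoformat(msg["timestamp"])
        if last_time is not None and t - last_time > GAP:
            threads.append(current)
            current = []
        current.append(msg)
        last_time = t
    if current:
        threads.append(current)
    return threads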

Pseudonymization

All personally identifiable information was removed or replaced:

  • Usernames: Replaced with consistent fake names (e.g., "Christopher Gonzalez")
  • User IDs: Hashed to pseudonymous IDs (consistent per user)
  • External links: Preserved (educational resources, homework platforms)
  • Uploaded images: Not included in this text-only version
  • Mentions: Replaced with pseudonymous names

The pseudonymization maintains conversation coherence (same user = same pseudonym) while protecting privacy.
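
The released files already contain these pseudonyms, so no further processing is needed. Purely as an illustration of how a consistent mapping can be built, here is a minimal sketch; the hashing scheme and the name list are assumptions, not the actual pipeline:

import hashlib

FAKE_NAMES = ["Christopher Gonzalez", "Maria Chen", "Samuel Okafor"]  # illustrative list

_pseudonyms = {}

def pseudonymize(real_user_id: str) -> dict:
    """Return a stable pseudonymous ID and fake name for a real user ID."""
    if real_user_id not in _pseudonyms:
        digest = hashlib.sha256(real_user_id.encode()).hexdigest()
        _pseudonyms[real_user_id] = {
            "author_id": str(int(digest[:15], 16)),  # numeric-looking pseudonymous ID
            "author_name": FAKE_NAMES[int(digest, 16) % len(FAKE_NAMES)],
        }
    return _pseudonyms[real_user_id]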

Role Labeling

Users are labeled as students, helpers, or bots based on:

  • Conversation context: Who initiates vs. responds
  • Message patterns: Question-asking vs. answer-providing
  • User behavior: Historical patterns across conversations

Important: Roles are conversation-specific. A user can be a student in one conversation and a helper in another.
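
The released labels combine all three signals above. As a simplified illustration of the first signal only, the sketch below treats the conversation initiator as the student and other human participants as helpers; this is an approximation, not how the actual labels were produced:

from datasets import load_dataset

df = load_dataset("mikeion/mathconverse_pseudonyms")["train"].to_pandas()

def label_roles(conv_df):
    """Heuristic: first author is the student, other non-bot authors are helpers."""
    conv_df = conv_df.copy()
    initiator = conv_df.iloc[0]["author_id"]
    conv_df["isStudent_heuristic"] = conv_df["author_id"] == initiator
    conv_df["isHelper_heuristic"] = ~conv_df["isStudent_heuristic"] & ~conv_df["isBot"]
    return conv_df

# Rows are already chronological within each conversation, so iloc[0] is the first message
labeled = df.groupby("conversation_id", group_keys=False).apply(label_roles)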

Quality Filtering

Removed:

  • Bot commands and administrative messages (flagged with isBot=True)
  • Off-topic discussions
  • Meta-conversations about server rules
  • Duplicate or malformed messages

Preserved:

  • All mathematical content
  • Social pleasantries (authenticity)
  • Informal language (realistic interactions)

Who are the source language producers?

Students

  • Demographics: Global participant pool, primarily ages 13-25
  • Education level: High school through university undergraduates
  • Language: Native and non-native English speakers
  • Motivation: Seeking homework help, exam preparation, conceptual understanding

Tutors

  • Demographics: Volunteer peer mentors with mathematics expertise
  • Education level: Advanced high school students through graduate students
  • Motivation: Helping others, practicing teaching, community contribution

Language Characteristics

  • Register: Informal, conversational
  • Medium: Text-based chat (Discord)
  • Features: Abbreviations, emoji, LaTeX math notation, code snippets (see the detection sketch after this list)
  • Multilingual: Primarily English with occasional non-English mathematical terminology
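
Because messages mix prose with LaTeX and inline code, it can be useful to flag them before further text processing. A minimal sketch using regular expressions; the patterns are simple assumptions and will not catch every variant:

import re

LATEX_PATTERN = re.compile(r"\$[^$]+\$|\\\[|\\\(|\\(frac|int|sum|sqrt)\b")
CODE_PATTERN = re.compile(r"```|`[^`]+`")

def message_features(text: str) -> dict:
    return {
        "has_latex": bool(LATEX_PATTERN.search(text)),
        "has_code": bool(CODE_PATTERN.search(text)),
    }

print(message_features(r"Can someone check \frac{1}{x} as x \to 0?"))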

Additional Information

Privacy and Ethics

Privacy Considerations

  • Public data: All data collected from publicly accessible channels where users expect their messages to be visible to others
  • Pseudonymization: All usernames and user IDs replaced with consistent pseudonyms
  • No PII: No email addresses, phone numbers, or other personal identifiers
  • No private messages: Only public channel conversations included
  • Terms of Service: Data collection compliant with Discord Terms of Service
  • Community consent: Server moderators aware of and supportive of research use

Ethical Considerations

  • Educational benefit: Dataset intended to improve educational technology and tutoring systems
  • No re-identification: Researchers must not attempt to de-anonymize users
  • Responsible use: Follow ethical guidelines for educational data research
  • Student vulnerability: Be mindful that students may be struggling or frustrated
  • No harm: Use should not negatively impact the Discord community or students
  • Attribution: Acknowledge the community's contribution to education research

Limitations and Biases

Selection bias:

  • Users who seek help on Discord may differ from general student population
  • Self-selection: motivated students who actively seek help
  • Access bias: requires internet access and familiarity with Discord

Platform bias:

  • Text-only medium (no voice, video, or whiteboard)
  • Mix of synchronous and asynchronous interaction
  • Global but English-dominant community

Content bias:

  • More homework help than conceptual discussions
  • Possible over-representation of certain math topics
  • Helper quality varies (peer tutors, not professional educators)

Temporal bias:

  • Activity patterns correlate with academic calendar
  • More activity during exam periods
  • Timezone distribution reflects global participation

Dataset Version and Maintenance

Current version: 1.0
Release date: January 2025
Status: Static release (no ongoing updates planned)

Potential future versions:

  • Additional metadata (topic labels, question categories)
  • Adjudicated labels from multiple annotators
  • Additional time periods
  • Image/attachment metadata

Licensing Information

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

You are free to:

  • Share: Copy and redistribute the material in any medium or format
  • Adapt: Remix, transform, and build upon the material for any purpose, including commercially

Under the following terms:

  • Attribution: You must give appropriate credit, provide a link to the license, and indicate if changes were made
  • No additional restrictions: You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits

See the full license text for details.

Usage Examples

Loading the Dataset

from datasets import load_dataset

# Load the full dataset
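# Note: access to this repository is gated; accept the conditions on the dataset page
# and authenticate first (e.g. with `huggingface-cli login`) before loading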
dataset = load_dataset("mikeion/mathconverse_pseudonyms")

# Access the data
print(f"Total messages: {len(dataset['train'])}")
print(f"Columns: {dataset['train'].column_names}")

# View first example
print(dataset['train'][0])

Basic Filtering

# Get all messages from a specific conversation
conv_id = 1
conversation = dataset['train'].filter(
    lambda x: x['conversation_id'] == conv_id
)

# Get only student messages
student_messages = dataset['train'].filter(
    lambda x: x['isStudent']
)

# Get messages from a specific channel
channel_messages = dataset['train'].filter(
    lambda x: x['help_channel'] == 'help-10'
)
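
The lambda filters above scan the 5.5 million rows one example at a time; the same datasets API accepts batched=True, which is typically faster. A minimal sketch selecting student messages while excluding bot messages:

# Batched filtering: the function receives a dict of lists and returns a list of booleans
student_messages = dataset['train'].filter(
    lambda batch: [s and not b for s, b in zip(batch['isStudent'], batch['isBot'])],
    batched=True,
)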

Conversation Analysis

import pandas as pd

# Convert to pandas for analysis
df = dataset['train'].to_pandas()

# Analyze conversation lengths
conv_lengths = df.groupby('conversation_id').size()
print(f"Average messages per conversation: {conv_lengths.mean():.2f}")
print(f"Median messages per conversation: {conv_lengths.median():.2f}")

# Find most active users
user_activity = df['author_name'].value_counts()
print(f"Most active users:\n{user_activity.head(10)}")

# Analyze student vs helper messages
role_dist = df[['isStudent', 'isHelper']].sum()
print(f"Student messages: {role_dist['isStudent']}")
print(f"Helper messages: {role_dist['isHelper']}")

Creating Train/Validation/Test Splits

# Split by conversation to avoid leakage
conv_ids = df['conversation_id'].unique()
n_conv = len(conv_ids)

# Shuffle and split (seeded so splits are reproducible)
import numpy as np
np.random.seed(0)
np.random.shuffle(conv_ids)

train_convs = conv_ids[:int(0.8*n_conv)]
val_convs = conv_ids[int(0.8*n_conv):int(0.9*n_conv)]
test_convs = conv_ids[int(0.9*n_conv):]

# Create splits
train_df = df[df['conversation_id'].isin(train_convs)]
val_df = df[df['conversation_id'].isin(val_convs)]
test_df = df[df['conversation_id'].isin(test_convs)]

Question Extraction

# Extract student questions (first student message per conversation)
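# Rows are already in chronological order within each conversation (see Data Instances),
# so .first() returns the earliest student message per conversation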
questions = df[df['isStudent'] == True].groupby('conversation_id').first()
print(f"Total questions: {len(questions)}")

# Analyze question length
questions['text_length'] = questions['text'].str.len()
print(f"Average question length: {questions['text_length'].mean():.2f} characters")

Baseline Results

Preliminary experiments demonstrate the dataset's utility for question classification tasks:

Question Classification Experiment

Task: Classify student questions into 6 categories based on Graesser & Person (1994):

  • Basic Inquiry
  • Procedural Reasoning
  • Causal Reasoning
  • Exploratory Inquiry
  • Contextual Inquiry
  • Assertive Communication

Models evaluated: 5 state-of-the-art LLMs across providers and cost tiers

  • GPT-3.5-turbo (baseline, budget)
  • GPT-4o-mini (efficient)
  • GPT-4o (flagship OpenAI)
  • Claude 3.5 Sonnet
  • Claude 4.5 Sonnet (best performance)

Results: Macro F1 scores ranging from 0.44 to 0.83

Baselines:

  • Zero-R (majority class): F1 = 0.09
  • TF-IDF + Logistic Regression: F1 = 0.22

The strong performance of LLMs (substantially outperforming traditional baselines) demonstrates that the dataset enables robust NLP experimentation, while the range of results highlights opportunities for improvement and specialized model development.
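
The category labels used for this experiment are not part of the current release (see "Potential future versions" above). For readers who want to reproduce the TF-IDF + logistic regression baseline once labels are available, here is a minimal sketch; the two labeled examples are made up for illustration:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled questions; real Graesser & Person category labels are not included yet
texts = ["Can someone help me factor x^2 - 4?", "Why does l'Hopital's rule work?"]
labels = ["Procedural Reasoning", "Causal Reasoning"]

baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
baseline.fit(texts, labels)
print(baseline.predict(["How do I integrate x^2?"]))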

Citation

If you use this dataset in your research, please cite:

@misc{ion_mathmentordb_2025,
  author = {Ion, Michael},
  title = {MathMentorDB: A Large-Scale Corpus of Mathematics Tutoring Dialogues},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/mikeion/mathconverse_pseudonyms},
  doi = {10.57967/hf/6805}
}

Associated conference paper: Forthcoming at LREC-COLING 2026

Related Datasets

  • MathDial: Math word problem dialogues (smaller scale, synthetic)
  • NCTE: Natural language tutoring dataset (different domain)
  • MathQA: Math question-answering (no dialogue context)
  • ASSISTments: Educational data (different format, structured)

MathMentorDB uniquely combines large scale, authentic dialogue, and educational context.

Contact and Contributions

Contact

For questions, issues, or collaboration inquiries:

  • Hugging Face: @mikeion
  • Dataset issues: Open an issue on this repository

Contributions

We welcome:

  • Bug reports for data quality issues
  • Suggestions for additional metadata or annotations
  • Research collaborations using the dataset
  • Citations and mentions of work using MathMentorDB

Acknowledgments

We thank the Mathematics Discord Server community and its moderators for creating this valuable educational resource and supporting research use of public channel data. We are grateful to the thousands of volunteer tutors who freely share their mathematical expertise to help students worldwide.


Dataset Version: 1.0
Last Updated: January 2025
DOI: 10.57967/hf/6805
