# PyTorch Semantic Code Dataset
A semantically-enriched Python code dataset combining syntactic tokenization with deep semantic analysis from Language Server Protocol (LSP) tools.
## Overview

This dataset enhances tokenized Python code with semantic embeddings derived from static analysis tools (Tree-sitter + Jedi), giving models both syntactic and semantic views of code symbols. Symbol tokens are aligned with rich semantic information, including type hints, definitions, documentation, and cross-references.
**Key Features:**

- **Tokenized Python Code**: Using the Qwen3-0.6B tokenizer
- **Semantic Embeddings**: 1024-dimensional vectors from Qwen3-Embedding-0.6B
- **Symbol Analysis**: Type information, definitions, and cross-references via Jedi
- **Precise Alignment**: Token-level mapping between syntax and semantics
- **Production Code**: Real PyTorch codebase for authentic patterns
## Dataset Statistics

| Metric | Value |
|---|---|
| Total Sequences | 1,540 |
| Training Samples | 1,232 |
| Evaluation Samples | 308 |
| Average Sequence Length | ~200 tokens (256 max length) |
| Semantic Coverage | ~35% of tokens have semantic information |
| Embedding Dimension | 1024 |
| Source Code | PyTorch codebase |
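These aggregate figures can be spot-checked directly from the released splits. A minimal sketch (it samples 100 training sequences rather than scanning the full dataset, so the averages are estimates):

```python
from datasets import load_dataset

dataset = load_dataset("ant-des/pytorch-semantic-dataset-fixed")
print({name: len(split) for name, split in dataset.items()})  # split sizes

subset = dataset["train"].select(range(100))  # small sample to keep this fast
lengths = [len(s["input_ids"]) for s in subset]
coverage = [sum(s["semantic_positions"]) / len(s["semantic_positions"]) for s in subset]

print(f"Average sequence length (sample): {sum(lengths) / len(lengths):.0f} tokens")
print(f"Average semantic coverage (sample): {100 * sum(coverage) / len(coverage):.1f}%")
```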
## Dataset Structure

Each sample contains:

```python
{
    "input_ids": [2, 847, 3288, ...],                # Tokenized code (Qwen3-0.6B)
    "semantic_embeddings": [[0.1, -0.2, ...], ...],  # 1024D embeddings per token
    "semantic_positions": [0, 0, 1, 1, 0, ...],      # Binary mask (1 = has semantic info)
    "attention_mask": [1, 1, 1, 1, 1, ...],          # Standard attention mask
    "file_path": "torch/nn/modules/linear.py",       # Source file
    "chunk_info": "lines_45_120",                    # Code chunk information
    "num_symbols": 23                                # Number of semantic symbols
}
```
### Field Descriptions

- `input_ids`: Token IDs from the Qwen3-0.6B tokenizer
- `semantic_embeddings`: One 1024D vector per token; semantic information for symbol tokens, zeros for non-symbols
- `semantic_positions`: Binary mask indicating which tokens have meaningful semantic embeddings
- `attention_mask`: Standard attention mask for the sequence
- `file_path`: Path to the original Python file
- `chunk_info`: Which part of the file this sequence represents
- `num_symbols`: Count of tokens that received semantic enrichment
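To see how these fields line up in practice, the sketch below decodes each symbol token of one sample and pulls out its semantic vector (numpy is used only for convenient slicing):

```python
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("ant-des/pytorch-semantic-dataset-fixed")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

sample = dataset["train"][0]
embeddings = np.array(sample["semantic_embeddings"])  # shape: (seq_len, 1024)

for i, has_semantic in enumerate(sample["semantic_positions"]):
    if has_semantic:  # only symbol tokens carry a non-zero embedding
        token = tokenizer.decode([sample["input_ids"][i]])
        print(i, repr(token), embeddings[i, :4], "...")
```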
## Semantic Information
The semantic embeddings encode rich information extracted via Jedi analysis:
### What's Embedded

- **Type Information**: e.g. `Type[ABC]`, `(self, event) -> None`
- **Definitions**: Function signatures, class definitions
- **Documentation**: Docstrings and comments
- **Cross-References**: Where symbols are defined
- **Import Resolution**: Module and package information
- **Scope Analysis**: Variable and function scope
### Example Semantic Descriptions

```text
# For the token "_StreamBase"
name: _StreamBase. kind: class_def. type: Type[_StreamBase].
definition: class _StreamBase. description: Base stream class abstraction.
```
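These description strings are what gets embedded. A minimal sketch of reproducing such an embedding with sentence-transformers, assuming the Qwen/Qwen3-Embedding-0.6B checkpoint (the exact encoding settings used to build the dataset are not specified here, so vectors may not match bit-for-bit):

```python
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

description = (
    "name: _StreamBase. kind: class_def. type: Type[_StreamBase]. "
    "definition: class _StreamBase. description: Base stream class abstraction."
)
vector = embedder.encode(description)
print(vector.shape)  # (1024,) — matches the dataset's embedding dimension
```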
## Quick Start

### Loading the Dataset
```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load dataset
dataset = load_dataset("ant-des/pytorch-semantic-dataset-fixed")

# Load corresponding tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

# Access splits
train_dataset = dataset["train"]
eval_dataset = dataset["test"]

print(f"Training samples: {len(train_dataset)}")
print(f"Evaluation samples: {len(eval_dataset)}")
```
### Inspecting Samples
```python
# Get a sample
sample = train_dataset[0]

# Reconstruct the code
code = tokenizer.decode(sample["input_ids"], skip_special_tokens=True)
print("Code:", code[:200] + "...")

# Check semantic coverage
semantic_tokens = sum(sample["semantic_positions"])
total_tokens = len(sample["semantic_positions"])
coverage = semantic_tokens / total_tokens * 100
print(f"Semantic coverage: {coverage:.1f}%")

# Find semantic tokens
for i, (token_id, has_semantic) in enumerate(zip(sample["input_ids"], sample["semantic_positions"])):
    if has_semantic:
        token_text = tokenizer.decode([token_id])
        print(f"Semantic token at position {i}: '{token_text}'")
```
## Use Cases

### 1. Semantic Code Completion
Train language models that understand code semantics for better completions:
```python
# Model sees both syntax and semantics
input_ids = [class_token, identifier_token]
semantic_info = [zero_embedding, class_definition_embedding]
# → Better understanding of class structure
```
### 2. Code Understanding Tasks
- Variable Type Inference: Using semantic type information
- Function Signature Prediction: Leveraging parameter and return type data
- Import Resolution: Understanding cross-module dependencies
- Refactoring Assistance: Knowing symbol definitions and usages
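One concrete way to use the dataset for the type-oriented tasks above is to treat the 1024D vectors as per-token auxiliary targets. A minimal sketch of such a regression head (the head itself is hypothetical, not part of this dataset):

```python
import torch
import torch.nn as nn

class SemanticTargetHead(nn.Module):
    """Hypothetical auxiliary head: regress symbol-token hidden states onto their semantic vectors."""
    def __init__(self, hidden_size, semantic_dim=1024):
        super().__init__()
        self.proj = nn.Linear(hidden_size, semantic_dim)

    def forward(self, hidden_states, semantic_embeddings, semantic_positions):
        pred = self.proj(hidden_states)                   # (batch, seq, 1024)
        mask = semantic_positions.unsqueeze(-1).float()   # only symbol tokens contribute
        loss = ((pred - semantic_embeddings) ** 2 * mask).sum() / mask.sum().clamp(min=1)
        return loss
```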
### 3. Multi-Modal Code Models
Combine syntactic and semantic representations:
```python
class SemanticCodeModel(nn.Module):
    def forward(self, input_ids, semantic_embeddings, semantic_positions):
        # Process both streams
        syntactic_repr = self.language_model(input_ids)
        semantic_repr = self.semantic_projection(semantic_embeddings)

        # Cross-attention fusion
        enhanced_repr = self.cross_attention(
            syntactic_repr, semantic_repr, semantic_positions
        )
        return enhanced_repr
```
## Creation Methodology

### 1. Source Selection
- PyTorch codebase for production-quality Python code
- Filtered files: 1KB - 200KB size range
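A minimal sketch of that size-based filtering step (the exact directory walk used to build the dataset is not published; the root path and thresholds below are illustrative):

```python
from pathlib import Path

MIN_SIZE, MAX_SIZE = 1_000, 200_000  # roughly 1 KB to 200 KB

def collect_python_files(root):
    """Yield .py files within the size range used for this dataset."""
    for path in Path(root).rglob("*.py"):
        if MIN_SIZE <= path.stat().st_size <= MAX_SIZE:
            yield path

files = list(collect_python_files("site-packages/torch"))
print(f"{len(files)} candidate files")
```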
### 2. Symbol Extraction
```python
# Tree-sitter for precise symbol locations
tree = parser.parse(source_code)
symbols = extract_identifiers(tree)  # Functions, classes, variables

# Jedi for semantic analysis
script = jedi.Script(code=source_code, path=file_path)
definitions = script.goto(line, column, follow_imports=True)
type_info = script.complete(line, column)
```
### 3. Semantic Description Generation
```python
def create_semantic_description(symbol):
    description = f"name: {symbol.name}. kind: {symbol.type}. type: {symbol.type_hint}."
    if symbol.definition:
        description += f" definition: {symbol.definition}."
    if symbol.docstring:
        description += f" description: {symbol.docstring[:100]}."
    return description
```
### 4. Embedding and Alignment
```python
# Generate embeddings
semantic_embeddings = embedding_model.encode(descriptions)

# Align to tokens using Tree-sitter locations
token_embeddings = align_symbols_to_tokens(
    symbols, semantic_embeddings, tokenizer_output
)
```
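`align_symbols_to_tokens` is not spelled out above. A hedged sketch of one way to implement it, using a fast tokenizer's character offsets to map each symbol's source span onto token indices (the `char_span` field and the slightly different signature are assumptions for illustration, not the dataset's internal format):

```python
def align_symbols_to_tokens(symbols, semantic_embeddings, tokenizer, source_code, dim=1024):
    """Give every token inside a symbol's character span that symbol's vector; zeros elsewhere."""
    enc = tokenizer(source_code, return_offsets_mapping=True, add_special_tokens=False)
    token_embeddings = [[0.0] * dim for _ in enc["input_ids"]]
    for symbol, vector in zip(symbols, semantic_embeddings):
        start, end = symbol["char_span"]  # assumed: character offsets from Tree-sitter
        for idx, (tok_start, tok_end) in enumerate(enc["offset_mapping"]):
            if tok_start < end and tok_end > start:  # token overlaps the symbol span
                token_embeddings[idx] = list(vector)
    return token_embeddings
```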
## Model Training Example
```python
from transformers import AutoTokenizer, AutoModel, Trainer, TrainingArguments
import torch
import torch.nn as nn

class SemanticCodeModel(nn.Module):
    def __init__(self, base_model_name, semantic_dim=1024):
        super().__init__()
        self.base_model = AutoModel.from_pretrained(base_model_name)
        self.semantic_projection = nn.Linear(semantic_dim, self.base_model.config.hidden_size)
        self.cross_attention = nn.MultiheadAttention(
            self.base_model.config.hidden_size, num_heads=8, batch_first=True
        )

    def forward(self, input_ids, semantic_embeddings, semantic_positions, **kwargs):
        # Base language model
        outputs = self.base_model(input_ids, **kwargs)
        hidden_states = outputs.last_hidden_state

        # Project semantic embeddings into the model's hidden size
        semantic_proj = self.semantic_projection(semantic_embeddings)

        # Apply semantic mask (zero out non-symbol positions)
        masked_semantic = semantic_proj * semantic_positions.unsqueeze(-1)

        # Cross-attention fusion
        enhanced_states, _ = self.cross_attention(
            hidden_states, masked_semantic, masked_semantic
        )
        return enhanced_states

# Data collator: pads every field of a batch to the longest sequence
class SemanticDataCollator:
    def __init__(self, tokenizer, semantic_dim=1024):
        self.tokenizer = tokenizer
        self.semantic_dim = semantic_dim

    def __call__(self, batch):
        max_len = max(len(item["input_ids"]) for item in batch)
        pad_id = self.tokenizer.pad_token_id or 0

        def pad(seq, value):
            return seq + [value] * (max_len - len(seq))

        return {
            "input_ids": torch.tensor([pad(b["input_ids"], pad_id) for b in batch]),
            "attention_mask": torch.tensor([pad(b["attention_mask"], 0) for b in batch]),
            "semantic_positions": torch.tensor(
                [pad(b["semantic_positions"], 0) for b in batch], dtype=torch.float
            ),
            "semantic_embeddings": torch.tensor(
                [pad(b["semantic_embeddings"], [0.0] * self.semantic_dim) for b in batch],
                dtype=torch.float,
            ),
        }

# Training setup (for actual Trainer runs, add a task head and loss on top of the fused states)
model = SemanticCodeModel("Qwen/Qwen3-0.6B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
data_collator = SemanticDataCollator(tokenizer)

training_args = TrainingArguments(
    output_dir="./semantic-code-model",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    warmup_steps=100,
    remove_unused_columns=False,  # keep the semantic fields so the collator can see them
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)

trainer.train()
```
## Requirements

```bash
# Core dependencies
pip install datasets transformers torch
pip install tree-sitter tree-sitter-python
pip install jedi sentence-transformers
pip install numpy huggingface-hub
```
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{pytorch_semantic_dataset_2025,
  title={PyTorch Semantic Code Dataset: Syntactic Tokenization with Semantic Enrichment},
  author={Antoine Descamps},
  year={2025},
  url={https://huggingface.co/datasets/ant-des/pytorch-semantic-dataset-fixed},
  note={A semantically-enriched Python code dataset combining Tree-sitter and Jedi analysis}
}
```
## License

This dataset is released under the MIT license.
Note: The source code is from the PyTorch project, which is licensed under BSD-3-Clause. This dataset contains processed representations of that code for research purposes.
## Dataset Card

### Dataset Summary
This dataset provides semantically-enriched Python code samples where each token is augmented with semantic information extracted through static analysis. It enables training of code models that can leverage both syntactic and semantic understanding.
### Supported Tasks
- Code completion with semantic awareness
- Type inference and prediction
- Symbol resolution and cross-referencing
- Code summarization and documentation
- Semantic code search and retrieval
### Languages
- Programming Language: Python
- Natural Language: English (for documentation and comments)
### Data Source
- PyTorch codebase (BSD-3-Clause licensed)
- Processed using Tree-sitter and Jedi static analysis tools
### Personal and Sensitive Information
This dataset contains only source code and does not include personal or sensitive information.