Update README.md
README.md CHANGED
@@ -76,4 +76,47 @@ train-eval-index:
train/eval: train/eval
size_categories:
- 100K<n<1M
---

## Dataset Description

The **Particle Physics Lagrangian Dataset** was created to train a BART model that generates Lagrangians from a given set of particle fields and their symmetries. This task supports research on field theories in particle physics.

### Data Generation

The dataset is generated with a pipeline built around AutoEFT, a tool that automates the construction of effective field theories (EFTs). Here it is used to build the invariant terms allowed by the specified fields and symmetries.

### Dataset Sampling

Due to the vast space of possible Lagrangians, careful sampling is essential:

1. **Uniform Dataset**: Provides evenly distributed Lagrangians, used for validation.
2. **Sampled Dataset**: Weighted towards extreme cases to improve learning, following insights from natural language processing.

#### Key Features

- **Field Count**: Skewed towards simpler Lagrangians with fewer fields.
- **Spin Types**: A balanced mix of scalars and fermions.
- **Gauge Groups**: SU(3), SU(2), and U(1) representations.
- **Trilinear Interaction Enrichment**: The dataset is enriched with trilinear interaction terms, which are fundamental to particle physics.

### Data Fields

- **fields**: List of input fields, identified by their quantum numbers.
- **Lagrangian**: The corresponding Lagrangian for the input fields.
- **train/eval**: A flag indicating whether the datapoint was used for training or evaluation.
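
As a quick orientation, a minimal sketch of loading and inspecting the data with the Hugging Face `datasets` library is shown below. The repository id is a placeholder, and the exact column names (`fields`, `Lagrangian`, `train/eval`) are assumed to match the list above.

```python
from datasets import load_dataset

# Placeholder repository id; replace it with the actual Hub id of this dataset.
dataset = load_dataset("your-username/particle-physics-lagrangians", split="train")

# Each record holds the input fields, the target Lagrangian, and the
# train/eval flag described above (column names assumed from the field list).
example = dataset[0]
print(example["fields"])
print(example["Lagrangian"])
print(example["train/eval"])
```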

### Encoding Scheme

To make the fields and Lagrangians digestible for the transformer model, the dataset undergoes a custom tokenization process that preserves their essential information:

- Fields and derivatives are tokenized to encapsulate quantum numbers, spins, and gauge symmetries.
- Key interactions are represented through positional tokens.
- Tokenization ensures all necessary contraction and symmetry details are conveyed.
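
The concrete token vocabulary is defined in the accompanying paper; purely as an illustration of the idea, the sketch below maps a single hypothetical field specification (spin plus gauge representations) onto discrete tokens, one per quantum number. The token names are invented for this example and are not the dataset's actual vocabulary.

```python
# Illustrative only: the real encoding scheme is defined by the dataset authors.
def tokenize_field(spin: str, su3: str, su2: str, u1: str) -> list:
    """Map one field's quantum numbers to a short sequence of discrete tokens."""
    return [
        f"SPIN_{spin}",  # e.g. SPIN_0 for a scalar, SPIN_1/2 for a fermion
        f"SU3_{su3}",    # SU(3) representation, e.g. a colour triplet
        f"SU2_{su2}",    # SU(2) representation, e.g. a doublet
        f"U1_{u1}",      # U(1) hypercharge
    ]

# A left-handed quark-doublet-like field encoded as four tokens.
print(tokenize_field("1/2", "3", "2", "1/6"))
# ['SPIN_1/2', 'SU3_3', 'SU2_2', 'U1_1/6']
```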

For further details on the methods and theoretical underpinnings of this work, please refer to the paper "Generating Particle Physics Lagrangians with Transformers" [arXiv:xxxx.xxxxx](https://arxiv.org/abs/xxxx.xxxxx).

### Usage

This dataset is designed for sequence-to-sequence tasks and is intended for training transformer models to generate particle physics Lagrangians from the input fields.
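
As a starting point, a minimal fine-tuning sketch with the Hugging Face `transformers` library is given below. The repository id, column names, and hyperparameters are placeholders rather than the settings used for the released model, and the stock BART tokenizer would in practice be extended with the custom physics tokens described above.

```python
from datasets import load_dataset
from transformers import (
    BartForConditionalGeneration,
    BartTokenizerFast,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Placeholder ids; swap in the actual dataset repository and tokenizer.
dataset = load_dataset("your-username/particle-physics-lagrangians", split="train")
tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def preprocess(batch):
    # "fields" may hold a list of field strings per example (an assumption about
    # the column layout); join them into a single encoder input string.
    inputs = [" ".join(f) if isinstance(f, (list, tuple)) else f for f in batch["fields"]]
    model_inputs = tokenizer(inputs, truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["Lagrangian"], truncation=True, max_length=512)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="bart-lagrangian", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```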