---
license: gpl-3.0
datasets:
- JoseEliel/lagrangian_generation
pipeline_tag: text2text-generation
tags:
- Physics
- Math
- Lagrangian
library_name: transformers
---
## Model Summary

BART-Lagrangian is a sequence-to-sequence Transformer (BART-based) specifically trained to generate particle physics Lagrangians from textual descriptions of fields, spins, and gauge symmetries. Unlike typical language models, BART-Lagrangian focuses on the symbolic structure of physics, aiming to produce coherent and accurate Lagrangian terms given customized tokens representing field types, spins, helicities, gauge groups (SU(3), SU(2), U(1)), and more.

Key Highlights:

• BART architecture with sequence-to-sequence pretraining  
• Custom tokenization scheme capturing field quantum numbers and contractions  
• Specialized training on a large corpus of symbolic physics data  

BART-Lagrangian is well suited for research and experimentation in symbolic physics or any domain requiring structured symbolic generation.

--------------------------------------------------------------------------------

## Usage

You can use BART-Lagrangian directly with the Hugging Face Transformers library:

1) Install prerequisites (for example with pip):  
   pip install transformers torch

2) Load the model and tokenizer:

```python
from transformers import BartForConditionalGeneration, PreTrainedTokenizerFast

model_name = "JoseEliel/BART-Lagrangian"
model = BartForConditionalGeneration.from_pretrained(model_name)
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_name)
```

3) Prepare your input. Below is a simple example describing two fields: a scalar with charges (1, 2, 1), and a fermion with helicity -1/2 and charges (3, 2, 1/3) under (SU(3), SU(2), U(1)):

```python
input_text = "[SOS] FIELD SPIN 0 SU2 2 U1 1 FIELD SPIN 1 / 2 SU3 3 SU2 2 U1 1 / 3 HEL - 1 / 2 [EOS]"
```
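
For longer field lists, it can help to build this string programmatically. Below is a minimal, hypothetical helper (`make_input` is not part of the released package) that assembles the token pattern shown above; adjust it if your field specification differs:

```python
def make_input(fields):
    """Assemble the [SOS] ... [EOS] token string from a list of field dicts.

    Each dict may contain: spin (str), su3 (str), su2 (str), u1 (str), hel (str).
    Only the tokens you supply are emitted, mirroring the example above,
    where singlet charges are simply omitted.
    """
    parts = ["[SOS]"]
    for f in fields:
        parts.append("FIELD")
        parts.append(f"SPIN {f['spin']}")
        if "su3" in f:
            parts.append(f"SU3 {f['su3']}")
        if "su2" in f:
            parts.append(f"SU2 {f['su2']}")
        if "u1" in f:
            parts.append(f"U1 {f['u1']}")
        if "hel" in f:
            parts.append(f"HEL {f['hel']}")
    parts.append("[EOS]")
    return " ".join(parts)

# Reproduces the example string above:
input_text = make_input([
    {"spin": "0", "su2": "2", "u1": "1"},
    {"spin": "1 / 2", "su3": "3", "su2": "2", "u1": "1 / 3", "hel": "- 1 / 2"},
])
```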

4) Perform generation:

```python
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(inputs['input_ids'], max_length=2048)
decoded_outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print("Generated Lagrangian:")
print(decoded_outputs[0])
```
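
If you want several candidate Lagrangians rather than a single greedy decode, the standard `generate` options from Transformers apply. A minimal sketch using beam search (the parameter values here are illustrative, not tuned for this model):

```python
outputs = model.generate(
    inputs["input_ids"],
    max_length=2048,
    num_beams=4,               # explore several decoding paths
    num_return_sequences=4,    # return all of them
    early_stopping=True,
)
for i, candidate in enumerate(tokenizer.batch_decode(outputs, skip_special_tokens=True)):
    print(f"--- Candidate {i + 1} ---")
    print(candidate)
```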

--------------------------------------------------------------------------------

## Evaluation

BART-Lagrangian has been evaluated on both:

• Internal test sets of symbolic Lagrangians to measure consistency and correctness.  
• Human inspection by domain experts to confirm the generated Lagrangian terms align with expected physics rules (e.g., correct gauge symmetries, valid contractions).

For more details on benchmarks and accuracy, refer to our paper “Generating particle physics Lagrangians with transformers” (arXiv:2501.09729; see the Citation section below).

--------------------------------------------------------------------------------

## Limitations

• Domain Specificity: BART-Lagrangian is specialized for Lagrangian generation; it may not perform well on unrelated language tasks.  
• Input Format Sensitivity: The model relies on a specific tokenized format for fields and symmetries. Incorrect or incomplete tokenization can yield suboptimal or invalid outputs (a basic sanity check is sketched after this list).  
• Potential Redundancy: Some generated Lagrangians can contain redundant terms, as non-redundant operator filtering was beyond the scope of initial training.  
• Context Length Limit: The default generation max_length is 2048 tokens, which may be insufficient for extremely large or highly complex expansions.
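
Because the input format matters, a cheap structural check before generation can catch obvious mistakes. The helper below (`looks_well_formed`, a hypothetical convenience, not part of the released package) only verifies the coarse token pattern used in the Usage example; it does not validate the physics content:

```python
import re

def looks_well_formed(input_text: str) -> bool:
    """Coarse structural check of the prompt format used in the Usage example."""
    if not (input_text.startswith("[SOS]") and input_text.endswith("[EOS]")):
        return False
    # At least one field block must be present.
    if "FIELD" not in input_text:
        return False
    # Only the token vocabulary shown in the Usage example should appear.
    allowed = re.compile(r"^(\[SOS\]|\[EOS\]|FIELD|SPIN|SU3|SU2|U1|HEL|[0-9]+|/|-)$")
    return all(allowed.match(tok) for tok in input_text.split())

assert looks_well_formed(
    "[SOS] FIELD SPIN 0 SU2 2 U1 1 FIELD SPIN 1 / 2 SU3 3 SU2 2 U1 1 / 3 HEL - 1 / 2 [EOS]"
)
```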

--------------------------------------------------------------------------------

## Training

• Architecture: BART, sequence-to-sequence Transformer with approximately 357M parameters.  
• Data: A large corpus of synthetically generated Lagrangians using a custom pipeline (AutoEFT + additional code).  
• Objective: Conditioned generation of invariant terms given field tokens, spins, and gauge group embeddings.  
• Hardware: Trained on an A100 GPU, leveraging standard PyTorch and Transformers libraries.  

For more technical details, see the paper referenced in the Citation section below.

--------------------------------------------------------------------------------

## License

The model, code, and weights are provided under the GPL-3.0 license.

--------------------------------------------------------------------------------

## Citation

If you use BART-Lagrangian in your work, please cite it as follows:

```bibtex
@misc{BARTLagrangian,
      title={Generating particle physics Lagrangians with transformers}, 
      author={Yong Sheng Koay and Rikard Enberg and Stefano Moretti and Eliel Camargo-Molina},
      year={2025},
      eprint={2501.09729},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2501.09729}, 
}
```