---
library_name: transformers
license: apache-2.0
language:
- hu
base_model:
- google/mt5-large
---

# Model Card for mT5-large-HuAMR

mT5-large fine-tuned to parse Hungarian text into Abstract Meaning Representation (AMR) graphs.

## Model Details

### Model Description

This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large), trained on a Hungarian-translated AMR 3.0 corpus together with the HuAMR dataset. Given a Hungarian sentence, it generates the corresponding AMR graph.

- **Model type:** Abstract Meaning Representation parser
- **Language(s) (NLP):** Hungarian

### Model Sources

- **Repository:** [GitHub Repo](https://github.com/botondbarta/HuAMR)
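
## How to Get Started with the Model

A minimal loading-and-inference sketch using the `transformers` API. The checkpoint id, decoding settings, and example sentence below are illustrative assumptions, not taken from this card; check the repository above for the published checkpoint.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder id: replace with the published Hub id or a local path.
model_id = "mT5-large-HuAMR"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# "A fiú el akar menni." = "The boy wants to go."
inputs = tokenizer("A fiú el akar menni.", return_tensors="pt")

# Beam search settings are an assumption, not the authors' setup.
outputs = model.generate(**inputs, max_new_tokens=256, num_beams=5)

# The model emits the AMR graph as a linearized string.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```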

## Training Details

### Training Procedure

#### Training Hyperparameters

- learning_rate: 5e-05
- train_batch_size: 1
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: AdamW
- lr_scheduler_type: linear
- max_grad_norm: 0.3
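
The hyperparameters above map directly onto `transformers` training arguments; a minimal sketch, assuming `Seq2SeqTrainingArguments` (the output directory and the `adamw_torch` optim flag are assumptions, not from the card):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-large-huamr",    # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # effective batch size: 1 x 16 = 16
    optim="adamw_torch",             # AdamW
    lr_scheduler_type="linear",
    max_grad_norm=0.3,               # gradient clipping
)
```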

#### Metrics

[More Information Needed]
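
Once reported, AMR parsing results are conventionally given as Smatch scores (F1 over matched graph triples). A purely illustrative sketch, assuming the `smatch` PyPI package; the two graphs below are toy examples, not outputs of this model:

```python
import smatch

gold = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
pred = "(w / want-01 :ARG0 (b / boy))"

# Count matched triples between the predicted and gold graphs,
# then turn the counts into precision/recall/F1.
match, test_total, gold_total = smatch.get_amr_match(pred, gold)
precision, recall, f_score = smatch.compute_f(match, test_total, gold_total)
print(f"Smatch P={precision:.3f} R={recall:.3f} F1={f_score:.3f}")
```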

## Citation

**BibTeX:**

[More Information Needed]

## Framework versions

- Transformers 4.34.1
- PyTorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1