---
license: bigscience-bloom-rail-1.0
language:
- it
---
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model is obtained by adapting bloom-1b7 to the Italian language. Italian is not among the languages supported by the BLOOM model, which makes using it in Italian contexts challenging. We adapt the original BLOOM model using the MAD-X language adaptation strategy. The adapted model is then fine-tuned on an Italian translation of the Dolly dataset and on two classification task prompts. For this step, we use data from two well-known EVALITA tasks: AMI2020 (misogyny detection) and HASPEEDE-v2-2020 (hate-speech detection).

## Model Details

### Model Description

We adapt bloom-1b7 to the Italian language using the MAD-X language adaptation strategy.
To produce a valuable model, we follow the same procedure proposed in: https://arxiv.org/abs/2212.09535

We use the default script parameters and select a sample of 100,000 examples in the Italian language, drawn from the filtered OSCAR dataset for the Italian language released by Sarti.

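Drawing a fixed-size sample from a large corpus can be sketched as follows. This is a minimal illustration, not the project's actual sampling code; the corpus iterator and seed are assumptions.

```python
import random

def sample_corpus(lines, k=100_000, seed=42):
    """Reservoir-sample k examples from an arbitrarily large text stream.

    Works in one pass without loading the whole corpus into memory;
    the seed makes the sample reproducible (illustrative value).
    """
    rng = random.Random(seed)
    reservoir = []
    for i, line in enumerate(lines):
        if i < k:
            reservoir.append(line)
        else:
            # Replace an existing element with decreasing probability k/(i+1).
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = line
    return reservoir
```

Reservoir sampling is convenient here because corpora like OSCAR are typically streamed rather than held in memory.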
The adapted model is then fine-tuned on an Italian translation of the Dolly dataset and on two classification task prompts built from two well-known EVALITA tasks:
AMI2020 (misogyny detection) and HASPEEDE-v2-2020 (hate-speech detection).

We transformed the training data of the two tasks into LLM prompts following a template. For the AMI task, we used the following template:

*instruction: Nel testo seguente si esprime odio contro le donne? Rispondi sì o no., input: \<text\>, output: \<sì/no\>.*

(The instruction asks: "Does the following text express hatred against women? Answer yes or no.")

Similarly, for HASPEEDE we used:

*instruction: Il testo seguente incita all’odio? Rispondi sì o no., input: \<text\>, output: \<sì/no\>.*

(The instruction asks: "Does the following text incite hatred? Answer yes or no.")

To fill these templates, we mapped the label "1" to the word "sì" and the label "0" to the word "no"; \<text\> is the sentence from the dataset to classify.

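The template filling described above can be sketched as follows; the function and constant names are illustrative, not the project's actual code.

```python
# Map the binary task labels to the Italian answer words used in the prompts.
LABEL_WORDS = {1: "sì", 0: "no"}

AMI_INSTRUCTION = "Nel testo seguente si esprime odio contro le donne? Rispondi sì o no."
HASPEEDE_INSTRUCTION = "Il testo seguente incita all’odio? Rispondi sì o no."

def build_prompt(instruction: str, text: str, label: int) -> str:
    """Fill the instruction/input/output template for one training example."""
    return (
        f"instruction: {instruction}, "
        f"input: {text}, "
        f"output: {LABEL_WORDS[label]}."
    )
```

For example, `build_prompt(AMI_INSTRUCTION, "...", 1)` produces a prompt ending in `output: sì.`.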
The Dolly dataset was automatically translated into Italian using the open-source machine translation tool Argos Translate: https://pypi.org/project/argostranslate/

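Record-level translation of the Dolly data can be sketched as follows. Here `translate_it` is an illustrative stand-in for the actual English-to-Italian translation call (e.g. a thin wrapper around Argos Translate); the field names follow the public databricks-dolly-15k schema, which is an assumption about the exact dataset version used.

```python
from typing import Callable

# Text fields of a databricks-dolly-15k record; "category" is left untranslated.
DOLLY_TEXT_FIELDS = ("instruction", "context", "response")

def translate_record(record: dict, translate_it: Callable[[str], str]) -> dict:
    """Translate the text fields of one Dolly record, leaving the rest intact.

    `translate_it` is any English->Italian translation callable; the card
    does not show the actual translation code, so this is a sketch.
    """
    out = dict(record)
    for field in DOLLY_TEXT_FIELDS:
        if record.get(field):  # "context" is often empty
            out[field] = translate_it(record[field])
    return out
```

Keeping the translation callable as a parameter makes the record-handling logic testable without downloading translation models.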
To fine-tune the adapted model, we use the script available here: https://github.com/hyintell/BLOOM-fine-tuning/tree/main

- **Developed by:** Pierpaolo Basile, Pierluigi Cassotti, Marco Polignano, Lucia Siciliani, Giovanni Semeraro. Department of Computer Science, University of Bari Aldo Moro, Italy
- **Model type:** BLOOM
- **Language(s) (NLP):** Italian
- **License:** BigScience BLOOM RAIL 1.0

## Citation

Pierpaolo Basile, Pierluigi Cassotti, Marco Polignano, Lucia Siciliani, Giovanni Semeraro. On the impact of Language Adaptation for Large Language Models: A case study for the Italian language using only open resources. Proceedings of the Ninth Italian Conference on Computational Linguistics (CLiC-it 2023).