Modalities: Text · Formats: parquet · Size: < 1K · Libraries: Datasets, pandas

iperbole committed (verified) · Commit a5075d9 · 1 Parent(s): 5cf3ba4

Update README.md

Files changed (1): README.md +45 -1

README.md CHANGED
@@ -47,4 +47,48 @@ The original game itself is not well-posed, the solution is not unique, and list
47
 For each game, the three distractors were chosen from among all possible Italian words; each distractor was selected to be aligned with 3 out of 5 hints and distant from the other ones (computing the cosine similarity over fastText static embeddings).
48
 Moreover, the distractors were chosen to have length at most len(solution) + 1.
49
 
50
- With this setting, we created three different words that are not possible solutions of the game, making the task relatively simple for humans to solve, but considerably harder for Language Models.
50
+ With this setting, we created three different words that are not possible solutions of the game, making the task relatively simple for humans to solve, but considerably harder for Language Models.
51
+
52
+ ## Example
53
+
54
+ Here you can see the structure of a single sample in the present dataset.
55
+
56
+
57
+ ```json
58
+ {
59
+ "w1": string, # text of the first hint
60
+ "w2": string, # text of the second hint
61
+ "w3": string, # text of the third hint
62
+ "w4": string, # text of the fourth hint
63
+ "w5": string, # text of the fifth hint
64
+ "choices": list, # list of possible words, with the correct one plus 3 distractors
65
+ "label": int, # index of the correct answer in the choices
66
+ }
67
+ ```
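As a minimal sketch of the schema above, the following builds one sample by hand and reads the answer out of it. The hint words and choices are made-up placeholder values, not actual entries from this dataset.

```python
# One sample following the dataset schema; all values below are illustrative.
sample = {
    "w1": "acqua",
    "w2": "fuoco",
    "w3": "terra",
    "w4": "aria",
    "w5": "cielo",
    "choices": ["sole", "luna", "stella", "mare"],  # correct word + 3 distractors
    "label": 2,  # index of the correct answer in `choices`
}

# Collect the five hints and resolve the gold answer via the label index.
hints = [sample[f"w{i}"] for i in range(1, 6)]
answer = sample["choices"][sample["label"]]
print(hints)   # the five hint words
print(answer)  # -> "stella"
```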
68
+
69
+ ## Statistics
70
+
71
+ Training: -
72
+
73
+ Test: -
74
+
75
+ ## Proposed Prompts
76
+ Here we describe the prompts given to the model, over which we compute the perplexity score; as the model's answer, we choose the prompt with the lowest perplexity.
77
+ Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.
78
+
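The selection rule described above (score each candidate prompt by perplexity and pick the lowest) can be sketched as follows. The per-token log-probabilities are invented placeholder numbers standing in for a language model's actual output, not scores from any of the models evaluated here.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# One candidate prompt per choice; the log-probs are illustrative stand-ins.
candidate_logprobs = {
    "sole":   [-2.1, -1.8, -2.5],
    "luna":   [-1.2, -0.9, -1.1],  # most likely under this hypothetical model
    "stella": [-3.0, -2.7, -2.9],
    "mare":   [-2.4, -2.2, -2.6],
}

# Score every candidate and answer with the lowest-perplexity one.
scores = {word: perplexity(lp) for word, lp in candidate_logprobs.items()}
prediction = min(scores, key=scores.get)
print(prediction)  # -> "luna"
```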
79
+ Description of the task:
80
+ ```txt
81
+ ```
82
+
83
+ Prompt:
84
+ ```txt
85
+ ```
86
+
87
+ ## Some Results
88
+
89
+ | QUANDHO | ACCURACY |
90
+ | :--------: | :----: |
91
+ | Mistral-7B | 0 |
92
+ | ZEFIRO | 0 |
93
+ | Llama-3 | 0 |
94
+ | ANITA | 0 |
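The distractor criterion described in the diff (a candidate is kept when it is aligned with 3 of the 5 hints and distant from the others, judged by cosine similarity over static embeddings) can be sketched with toy vectors. The 2-d "embeddings" and the alignment threshold below are illustrative assumptions, not the authors' actual fastText vectors or cutoff.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 2-d embeddings for the five hints and one candidate distractor.
hints = {
    "h1": (1.0, 0.0),
    "h2": (0.9, 0.1),
    "h3": (0.8, 0.2),
    "h4": (0.0, 1.0),
    "h5": (0.1, 0.9),
}
candidate = (1.0, 0.05)

THRESHOLD = 0.8  # illustrative cutoff for counting a hint as "aligned"
aligned = [h for h, vec in hints.items() if cosine(vec, candidate) >= THRESHOLD]
is_valid_distractor = len(aligned) == 3  # aligned with exactly 3 of 5 hints
print(aligned, is_valid_distractor)  # -> ['h1', 'h2', 'h3'] True
```

In the real pipeline a length filter (len(distractor) <= len(solution) + 1) would be applied on top of this similarity check, as stated in the README.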