|
--- |
|
library_name: transformers |
|
tags: |
|
- argumentation |
|
license: apache-2.0 |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
# ADBL2-Mistral-7B |
|
|
|
ADBL2-Mistral-7B is a fine-tuned version of Mistral-7B-v0.1 trained to perform relation-based argument mining.
|
Given two arguments *x* and *y*, we use this model in synergy with [LMQL](https://lmql.ai/) to predict whether *y* attacks or supports *x*.
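
For illustration, here is a minimal LMQL sketch of this synergy. It assumes LMQL's `local:` transformers backend and a choice-from-set constraint; the checkpoint path and the `predict_relation` function are placeholders, and the exact prompt format is detailed below.

```python
import lmql

# Minimal sketch: "local:" asks LMQL to load the checkpoint through
# transformers; the path below is a placeholder for the actual weights.
@lmql.query(model="local:path/to/ADBL2-Mistral-7B")
def predict_relation(arg1, arg2):
    '''lmql
    # <s> is omitted because the tokenizer prepends the BOS token itself
    "[INST]\n"
    "Argument 1 : {arg1}\n"
    "Argument 2 : {arg2}\n"
    "[/INST]\n"
    # the constraint restricts decoding to exactly the two trained labels
    "Relation : [REL]" where REL in set(["attack", "support"])
    return REL
    '''

relation = predict_relation("using machines is advantageous",
                            "the usage of machines is harmful for health of humans")
print(relation)  # expected: "attack"
```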
|
|
|
## Fine-tuning |
|
We fine-tuned [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with the PEFT method [QLoRA](https://arxiv.org/abs/2305.14314) on argument pairs from the online debate platform [Kialo](https://www.kialo.com/).
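
For reference, here is a minimal sketch of such a QLoRA setup with the transformers + peft + bitsandbytes stack. The rank, target modules, and other hyperparameters below are illustrative placeholders, not the values used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "mistralai/Mistral-7B-v0.1"

# QLoRA: the frozen base model is loaded in 4-bit NF4 precision...
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

# ...and only small low-rank adapters are trained on top of it.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                # illustrative rank, not the value used for this card
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# the adapted model can then be trained on Kialo argument pairs rendered
# in the prompt format described below, e.g. with transformers' Trainer
```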
|
|
|
## Prompt format |
|
This model has been trained to complete the following prompt format:
|
```
<s>[INST]
Argument 1 : /*Argument 1*/
Argument 2 : /*Argument 2*/
[/INST]
Relation :
```
|
with the relation **attack** or **support**:
|
```
Relation : attack/support
</s>
```
|
### Example
|
Consider two arguments, where argument 2 attacks argument 1:
|
- Argument 1 : using machines is advantageous |
|
- Argument 2 : the usage of machines is harmful for health of humans |
|
|
|
The prompt to retrieve the relation between the second argument and the first should be:
|
```
<s>[INST]
Argument 1 : using machines is advantageous
Argument 2 : the usage of machines is harmful for health of humans
[/INST]
Relation :
```
|
Our model should complete this prompt as follows:
|
```
<s>[INST]
Argument 1 : using machines is advantageous
Argument 2 : the usage of machines is harmful for health of humans
[/INST]
Relation : attack
</s>
```
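
Without LMQL, the same completion can be reproduced directly with transformers. A minimal sketch, using a placeholder path for the checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/ADBL2-Mistral-7B"  # placeholder for the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# <s> is omitted here because the tokenizer prepends the BOS token itself
prompt = (
    "[INST]\n"
    "Argument 1 : using machines is advantageous\n"
    "Argument 2 : the usage of machines is harmful for health of humans\n"
    "[/INST]\n"
    "Relation :"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=3, do_sample=False)
completion = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion.strip())  # expected: "attack"
```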