Tianlin668 committed
Commit 9f194a0 · 1 Parent(s): 6486016

Update README.md

Files changed (1): README.md (+61 −1)

README.md CHANGED
@@ -9,4 +9,64 @@ tags:
- BART
- text-generation-inference
- Inference Endpoints
---

# Introduction

MentalBART is part of the [MentaLLaMA](https://github.com/SteveKGYang/MentalLLaMA) project, the first open-source large language model (LLM) series for interpretable mental health analysis with instruction-following capability. The model is fine-tuned from the facebook/bart-large foundation model on the full IMHI instruction-tuning data, which contains 75K high-quality natural language instructions, to boost its performance in downstream tasks. It is expected to perform complex mental health analysis for various mental health conditions and to give reliable explanations for each of its predictions. We perform a comprehensive evaluation on the IMHI benchmark with 20K test samples; the results show that MentalBART achieves good performance in both prediction correctness and the quality of its generated explanations.

# Ethical Consideration

Although experiments on MentalBART show promising performance on interpretable mental health analysis, we stress that all predicted results and generated explanations should only be used for non-clinical research, and help-seekers should get assistance from professional psychiatrists or clinical practitioners. In addition, recent studies have indicated that LLMs may introduce potential bias, such as gender gaps. Meanwhile, incorrect prediction results, inappropriate explanations, and over-generalization also illustrate the potential risks of current LLMs. Therefore, many challenges remain in applying the model to real-scenario mental health monitoring systems.

## Other Models in MentaLLaMA

In addition to MentalBART, the MentaLLaMA project includes three other models: MentaLLaMA-chat-13B, MentaLLaMA-chat-7B, and MentalT5.

- **MentaLLaMA-chat-13B**: This model is fine-tuned from the Meta LLaMA2-chat-13B foundation model on the full IMHI instruction-tuning data. The training data covers 10 mental health analysis tasks.

- **MentaLLaMA-chat-7B**: This model is fine-tuned from the Meta LLaMA2-chat-7B foundation model on the full IMHI instruction-tuning data. The training data covers 10 mental health analysis tasks.

- **MentalT5**: This model is fine-tuned from the T5-large foundation model on the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well in interpretable mental health analysis in a completion-based manner.

## Usage

You can use the MentalBART model in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model:

```python
from transformers import BartTokenizer, BartModel

tokenizer = BartTokenizer.from_pretrained('Tianlin668/MentalBART')
model = BartModel.from_pretrained('Tianlin668/MentalBART')
```

## License

MentalBART is licensed under MIT. For more details, please see the MIT file.

## Citation

If you use MentalBART in your work, please cite our paper:

```bibtex
@misc{yang2023mentalllama,
      title={MentaLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models},
      author={Kailai Yang and Tianlin Zhang and Ziyan Kuang and Qianqian Xie and Sophia Ananiadou},
      year={2023},
      eprint={2309.13567},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```