mrSoul7766 committed on
Commit c8360e2 · verified · 1 Parent(s): eefa9ed

Update README.md

Files changed (1): README.md (+59, −1)

README.md CHANGED
- text: 'answer: how could I track the compensation?'
  example_title: Ex1
license: apache-2.0
---

## Model Summary

The Fine-tuned Chatbot model is based on [t5-small](https://huggingface.co/t5-small) and has been specifically tailored for customer support use cases. The training data used for fine-tuning comes from the [Bitext Customer Support LLM Chatbot Training Dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset/viewer/default/train).

### Model Details

- **Base Model:** [t5-small](https://huggingface.co/t5-small)
- **Fine-tuning Data:** [Bitext Customer Support LLM Chatbot Training Dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset/viewer/default/train)
- **train_loss:** 1.1188
- **val_loss:** 0.9132
- **bleu_score:** 0.9233
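
The `bleu_score` above measures n-gram overlap between generated and reference answers. As an illustration only (this is not the evaluation code used for this card), clipped unigram precision — the 1-gram building block of BLEU — can be sketched with the standard library:

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Clipped unigram precision: the 1-gram building block of BLEU.

    Illustrative toy metric only, not the exact BLEU used for this model card.
    """
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    # Each candidate token is credited at most as often as it appears in the reference.
    clipped = sum(min(count, ref_counts[tok]) for tok, count in Counter(cand_tokens).items())
    return clipped / len(cand_tokens)

print(unigram_precision(
    "you can track it on our website",
    "you can track the refund on our website",
))  # -> 0.8571428571428571
```

Full BLEU additionally combines higher-order n-grams and a brevity penalty; libraries such as sacrebleu implement the standard definition.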

### Usage

To use the fine-tuned chatbot model, you can leverage the Hugging Face Transformers library. Here's a basic example in Python:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text2text-generation", model="mrSoul7766/CUSsupport-chat-t5-small")

# Example user query
user_query = "How could I track the compensation?"

# Generate a response (note the "answer: " task prefix)
answer = pipe(f"answer: {user_query}", max_length=512)

# Print the generated answer
print(answer[0]['generated_text'])
```

Example output:

```
I'm on it! I'm here to assist you in tracking the compensation. To track the compensation, you can visit our website and navigate to the "Refunds" or "Refunds" section. There, you will find detailed information about the compensation you are entitled to. If you have any other questions or need further assistance, please don't hesitate to let me know. I'm here to help!
```
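
The pipeline example above prepends an `answer:` task prefix, matching the widget configuration in the front matter. A tiny helper can keep that formatting consistent across queries — a sketch; the function name is ours, purely illustrative:

```python
def build_prompt(user_query: str) -> str:
    """Prepend the 'answer:' task prefix used by the widget and pipeline example.

    Hypothetical helper for illustration; not part of the model's API.
    """
    return f"answer: {user_query.strip()}"

print(build_prompt("How could I track the compensation?"))
# -> answer: How could I track the compensation?
```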

### Or load the model directly

```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/chat-support-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/chat-support-t5-small")

# Set maximum generation length
max_length = 512

# Generate a response with the question as input
input_ids = tokenizer.encode("I am waiting for a refund of $2?", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=max_length)

# Decode the response
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```

Example output:

```
I'm on it! I completely understand your anticipation for a refund of $Refunds. Rest assured, I'm here to assist you every step of the way. To get started, could you please provide me with more details about the specific situation? This will enable me to provide you with the most accurate and up-to-date information regarding your refund. Your satisfaction is our top priority, and we appreciate your patience as we work towards resolving this matter promptly.
```