---
language:
- en
- fr
max_length: 512
tags:
- text-generation-inference
- customer support
widget:
- text: 'answer: how could I track the compensation?'
  example_title: Ex1
license: apache-2.0
---

## Model Summary

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) tailored for customer-support dialogue. It was fine-tuned on the [Bitext Customer Support LLM Chatbot Training Dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset/viewer/default/train).

### Model Details

- **Base Model:** [t5-small](https://huggingface.co/t5-small)
- **Fine-tuning Data:** [Bitext Customer Support LLM Chatbot Training Dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset/viewer/default/train)
- **train_loss:** 1.1188
- **val_loss:** 0.9132
- **bleu_score:** 0.9233


### Usage

You can use the model through the Hugging Face Transformers library. The simplest option is the high-level `pipeline` helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text2text-generation", model="mrSoul7766/CUSsupport-chat-t5-small")

# Example user query
user_query = "How could I track the compensation?"

# Generate response
answer = pipe(f"answer: {user_query}", max_length=512)

# Print the generated answer
print(answer[0]['generated_text'])
```
```
I'm on it! I'm here to assist you in tracking the compensation. To track the compensation, you can visit our website and navigate to the "Refunds" or "Refunds" section. There, you will find detailed information about the compensation you are entitled to. If you have any other questions or need further assistance, please don't hesitate to let me know. I'm here to help!
```
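The example above prepends the `answer:` task prefix that the widget and training prompts use. A small helper keeps that prompt format consistent across queries (a minimal sketch; the `build_prompt` name is illustrative, not part of the model's API):

```python
def build_prompt(user_query: str) -> str:
    """Prepend the 'answer: ' task prefix used when querying this model."""
    return f"answer: {user_query.strip()}"

# Format several support queries at once
queries = [
    "How could I track the compensation?",
    "Where can I check my order status?",
]
prompts = [build_prompt(q) for q in queries]
print(prompts[0])  # answer: How could I track the compensation?
```

Each formatted prompt can then be passed to `pipe(...)` exactly as in the example above.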
### Loading the model directly
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")

# Set maximum generation length
max_length = 512

# Encode the user question (the pipeline example above prefixes queries
# with "answer: "; the same prefix can be applied here as well)
input_ids = tokenizer.encode("I am waiting for a refund of $2?", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=max_length)

# Decode response
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
```
I'm on it! I completely understand your anticipation for a refund of $2. Rest assured, I'm here to assist you every step of the way. To get started, could you please provide me with more details about the specific situation? This will enable me to provide you with the most accurate and up-to-date information regarding your refund. Your satisfaction is our top priority, and we appreciate your patience as we work towards resolving this matter promptly.
```
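Beyond `max_length`, output quality can often be tuned through the keyword arguments of `model.generate`. The sketch below collects a few commonly used decoding settings; the values are illustrative choices, not defaults published with this model:

```python
# Illustrative decoding settings to pass as model.generate(input_ids, **gen_kwargs).
# These values are examples for experimentation, not this model's documented defaults.
gen_kwargs = dict(
    max_length=512,           # matches the max_length used in the examples above
    num_beams=4,              # beam search instead of greedy decoding
    no_repeat_ngram_size=3,   # discourage repeated phrases in support replies
    early_stopping=True,      # stop once all beams have finished
)
print(sorted(gen_kwargs))
```

Beam search with an n-gram repetition penalty tends to reduce the looping phrases that short seq2seq models can produce in long replies; greedy decoding (the default) remains the fastest option.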