---
license: cc-by-4.0
datasets:
- tatsu-lab/alpaca
- Open-Orca/OpenOrca
language:
- en
metrics:
- accuracy
library_name: transformers
widget:
- text: |-
    Instruction: You are a respectful, friendly, helpful assistant.

    Question : What is the capital of France?
- text: I am a Robot and I
- text: The Universe is
- text: The doom of AI will start when
pipeline_tag: text-generation
---


## Model Description

This conversational QA model was developed by Aditya Bavadekar. It is built on the GPT-2 architecture and fine-tuned for the task of answering questions.

- **Model Type:** Conversational (Question to Answer)
- **Language(s) (NLP):** English
- **Model Name:** gpt2-medium-finetuned-qamodel
- **Fine-tuned from Model:** gpt2

## Model Sources

- **Repository:** [GitHub Repository](https://github.com/AdityaBavadekar/AIMyData/blob/master/gpt2_medium_finetuned_qa_model.ipynb)
- **Colab Run:** [Google Colab Notebook](https://colab.research.google.com/drive/1_KEBloh68mzMP2PCBfJwcdX_hgzqjHhN?usp=sharing)

## Bias, Risks, and Limitations

This model is currently in the testing phase and is likely to exhibit biases. Caution should be exercised when using the model's outputs for critical tasks.

## Getting Started

You can get started quickly by loading the model through a `transformers` pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="AdityaBavadekar/gpt2-medium-finetuned-qamodel")
```
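
If you prefer explicit control over tokenization and generation, the same checkpoint can be loaded directly with the standard `transformers` auto-classes (a minimal sketch; this is simply an equivalent loading path, not a separate API of this model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loads the same checkpoint that the pipeline wraps.
tokenizer = AutoTokenizer.from_pretrained("AdityaBavadekar/gpt2-medium-finetuned-qamodel")
model = AutoModelForCausalLM.from_pretrained("AdityaBavadekar/gpt2-medium-finetuned-qamodel")
```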

The following generation settings are a recommended starting point:

```python
BEAM_SIZE = 5        # beams for beam search
TEMPERATURE = 2.0    # note: only takes effect if do_sample=True is also passed
MAX_LENGTH = 200     # cap on total tokens (prompt + completion)
NO_REPEAT_NGRAM = 2  # block repeated bigrams

PROMPT = """
Instruction: You are a respectful, friendly, helpful assistant.

Question : What is your name?
"""

generated_text = generator(
    PROMPT,
    max_length=MAX_LENGTH,
    temperature=TEMPERATURE,
    num_beams=BEAM_SIZE,
    no_repeat_ngram_size=NO_REPEAT_NGRAM,
    early_stopping=True,
)[0]["generated_text"]

print(generated_text)
```
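
The pipeline returns the prompt together with the completion. If you only want the model's answer, you can strip the prompt off yourself (a small post-processing sketch, not part of the original notebook), or pass `return_full_text=False` to the pipeline call:

```python
# Continues from the snippet above: drop the echoed prompt, keep only the answer.
answer = generated_text[len(PROMPT):].strip()
print(answer)
```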

## Prompt Format

To interact with the model, use the following prompt format:

```
Instruction: [instruction_here]

Question : [question_here]
```
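
A small helper (hypothetical, not shipped with the model) can assemble prompts in this layout so the formatting stays consistent:

```python
def build_prompt(question: str,
                 instruction: str = "You are a respectful, friendly, helpful assistant.") -> str:
    # Mirrors the "Instruction: ... / Question : ..." layout shown above.
    return f"Instruction: {instruction}\n\nQuestion : {question}\n"

prompt = build_prompt("What is the capital of France?")
```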

## Training Details

- Final QA Dataset Size: 57,283 Samples
- GPUs: 1 (Tesla T4)
- Learning Rate: 3e-4
- Epochs: 3 (Training was halted prematurely due to time constraints)
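
The exact training code is in the linked notebook. As a rough illustration of a comparable setup with the hyperparameters listed above (the dataset preparation and batch size here are assumptions, not taken from the notebook):

```python
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

# Assumption: QA pairs are rendered into the prompt layout documented above.
raw = load_dataset("tatsu-lab/alpaca", split="train")

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

def to_features(batch):
    texts = [
        f"Instruction: {ins}\n\nQuestion : {inp}\n{out}"
        for ins, inp, out in zip(batch["instruction"], batch["input"], batch["output"])
    ]
    return tokenizer(texts, truncation=True, max_length=512)

dataset = raw.map(to_features, batched=True, remove_columns=raw.column_names)

model = GPT2LMHeadModel.from_pretrained("gpt2")
args = TrainingArguments(
    output_dir="gpt2-medium-finetuned-qamodel",
    learning_rate=3e-4,             # from the card
    num_train_epochs=3,             # the original run was halted early
    per_device_train_batch_size=4,  # assumption: not stated in the card
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```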

Note that this model card provides an overview of the conversational QA model and guidance on how to use it effectively. Keep the model's limitations and potential biases in mind when interpreting its outputs.