ConvoAI

Model Card for ConvoAI

Model Details

  • Architecture: Transformer decoder
  • Number of layers: 32
  • Hidden size: 1024
  • Number of attention heads: 32
  • Context window: 1024 tokens
  • Vocabulary size: 50,257
  • Training objective: Causal language modeling (next-token prediction) on dialogue data
  • Tokenizer: GPT-2 Byte-Pair Encoding
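
The numbers above correspond to a GPT-2-style decoder. As a rough sanity check, an equivalent configuration can be built with the transformers library; this is a sketch under the assumption that the implementation follows transformers' GPT-2 classes, which the card does not state:

from transformers import GPT2Config, GPT2LMHeadModel

# GPT-2-style config mirroring the details listed above (assumption: the
# original implementation may differ from transformers' GPT2 classes).
config = GPT2Config(
    vocab_size=50257,
    n_positions=1024,  # context window
    n_embd=1024,       # hidden size
    n_layer=32,
    n_head=32,
)
model = GPT2LMHeadModel(config)
print(f"{model.num_parameters() / 1e6:.0f}M parameters")  # roughly 456M

With tied input/output embeddings, this configuration comes out to roughly 456M parameters, matching the model size reported below.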

Model Description

ConvoAI is a fully custom conversational AI model trained from scratch on the DailyDialog dataset. Built on a Transformer decoder architecture, the model does not rely on any pretrained weights and is tailored specifically to generating human-like dialogue. It learns to produce coherent and contextually appropriate responses in multi-turn conversations.

The model uses the GPT-2 tokenizer and was trained with a causal language-modeling objective on dialogue data. It performs well on casual conversation and is suitable for chatbot applications, dialogue-system research, and educational purposes. Because it was not pretrained on a large general corpus, its world knowledge and open-domain capabilities are limited to the scope of DailyDialog.

Model Sources

https://huggingface.co/spaces/GBhaveshKumar/ConvoAI

Limitations

  • Because the model was trained only on the DailyDialog dataset and not on large-scale corpora such as Wikipedia or Common Crawl, it lacks general world knowledge and factual accuracy on many topics.

  • With a limited training sequence length (256 tokens), the model may struggle to maintain coherence in very long conversations or to recall information from earlier in the dialogue; see the truncation sketch after this list.

  • The model doesn’t learn from new interactions unless it is retrained. It cannot update its knowledge or remember prior conversations between sessions.

  • On CPU-only systems, generation can be slow.

  • The model has not been fine-tuned beyond its base training on DailyDialog.
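
Because the maximum training sequence length was 256 tokens, long histories are best truncated from the left so the most recent turns are kept. A minimal sketch; the plain-text history string is an illustrative assumption, since the card does not document a turn format:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GBhaveshKumar/ConvoAI")
tokenizer.truncation_side = "left"  # keep the most recent tokens

history = "Hi, how are you? I'm fine, thanks. What are your plans for today?"
inputs = tokenizer(
    history,
    truncation=True,
    max_length=256,  # matches the training sequence length
    return_tensors="pt",
)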

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model weights and the matching GPT-2 BPE tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("GBhaveshKumar/ConvoAI")
tokenizer = AutoTokenizer.from_pretrained("GBhaveshKumar/ConvoAI")
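
A minimal generation sketch using the sampling settings reported under Training Hyperparameters below; the prompt string and max_new_tokens value are illustrative assumptions, as the card does not specify a prompt template or generation length:

prompt = "Hello! How was your day?"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a reply with the card's reported decoding settings.
output_ids = model.generate(
    **inputs,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.8,
    max_new_tokens=50,                    # assumed; not stated on the card
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 tokenizer has no pad token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))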

Training Data

Dataset: https://huggingface.co/datasets/roskoN/dailydialog

Training Hyperparameters

  • Model type: Transformer

  • Tokenizer: GPT-2 tokenizer

  • Dataset: DailyDialog

  • Number of Epochs: 15

  • Maximum Sequence Length: 256

  • Batch Size: 16

  • Embedding Size (n_embd): 1024

  • Number of Layers (n_layer): 32

  • Number of Attention Heads (n_head): 32

  • Sampling Strategy (a configuration sketch follows this list):

    Top-k: 50
    Top-p (nucleus sampling): 0.95
    Temperature: 0.8
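
These values can be bundled into a reusable GenerationConfig; a sketch reusing the model and inputs from the getting-started snippet (max_new_tokens is again an assumed value):

from transformers import GenerationConfig

# The card's sampling strategy as one reusable configuration object.
gen_config = GenerationConfig(
    do_sample=True,
    top_k=50,
    top_p=0.95,
    temperature=0.8,
    max_new_tokens=50,  # assumed; not stated on the card
)
output_ids = model.generate(**inputs, generation_config=gen_config)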

Model Size

456M parameters (F32, Safetensors)