---
license: cc-by-4.0
library_name: transformers
language:
- en
pipeline_tag: text-generation
tags:
- text-generation
- code-assistant
- a3on
- A3ON
- Kaiiddo
---
# A3ON-1B - Enhanced AI Assistant
## Model Overview
Welcome to **A3ON-1B**, the enhanced version of the A3ON AI assistant! With **1.1 billion parameters**, this model is designed to provide significantly improved capabilities over the original 124M parameter model. Whether you need help with conversational tasks or code generation, A3ON-1B is here to assist you!
## Key Features
- **Enhanced Intelligence**: With 1.1B parameters, A3ON-1B offers more sophisticated understanding and responses.
- **Code Generation**: Get advanced programming assistance and code completion.
- **Conversational Intelligence**: Engage in natural dialogue with seamless understanding and response generation.
- **Context Awareness**: Maintains context across multi-turn conversations for a more coherent interaction.
- **Smart Response Detection**: Automatically distinguishes between coding and general knowledge requests.
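The card does not document how the request detection works internally. As a purely hypothetical illustration of the idea (the keyword list and function below are not part of A3ON-1B), a prompt-routing heuristic could look like:

```python
# Toy illustration only: a standalone heuristic that routes a prompt to a
# "code" or "general" handling path by keyword matching. This is NOT the
# mechanism A3ON-1B uses; it only sketches the concept.
CODE_KEYWORDS = {"code", "function", "bug", "python", "class", "compile", "regex"}

def classify_request(prompt: str) -> str:
    """Return 'code' if the prompt looks like a programming request."""
    words = set(prompt.lower().split())
    return "code" if words & CODE_KEYWORDS else "general"

print(classify_request("Write a Python function to reverse a list"))  # code
print(classify_request("What is the weather like today?"))            # general
```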
## Technical Specifications
| Specification | Details |
|---------------|---------|
| **Architecture** | Transformer-based neural network |
| **Model Type** | Causal language model |
| **Parameters** | 1.1 Billion (1,137,207,296) |
| **Vocabulary Size** | 49,152 tokens |
| **Context Length** | 8,192 tokens |
| **Precision** | FP32/FP16 support |
## Developer Information
- **AI Name**: A3ON-1B
- **Developer**: Kaiiddo
- **Founder**: Aryan Rathod
- **Organization**: Kaiiddo
- **Location**: Gujarat, India
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("kaiiddo/A3ON-1B")
model = AutoModelForCausalLM.from_pretrained("kaiiddo/A3ON-1B")

# Set pad_token_id to eos_token_id to avoid warnings during generation
model.config.pad_token_id = model.config.eos_token_id

# Tokenize the prompt and generate a response
inputs = tokenizer("Hello, how can I help you today?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=500,
    do_sample=True,
    temperature=0.7,
    top_k=50,
)

# Decode the generated tokens and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Model Parameter Count
| Parameter Type | Count |
|----------------|-------|
| **Total Parameters** | 1.1B (1,137,207,296) |
| **Trainable Parameters** | 1.1B (1,137,207,296) |
| **Non-Trainable Parameters** | 0 |
### Model Architecture
| Architecture Detail | Value |
|---------------------|-------|
| **Model Type** | GPTBigCodeForCausalLM |
| **Context Length** | 8192 tokens |
| **Vocabulary Size** | 49,152 tokens |
| **Embedding Dimension** | 2048 |
| **Number of Layers** | 24 |
| **Number of Attention Heads** | 16 |
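As a sanity check, the 1,137,207,296 total can be reproduced from the architecture values above, assuming the standard GPTBigCode layout with multi-query attention (one shared key/value head of size `d / heads`) and learned position embeddings. This is a back-of-the-envelope derivation, not an official breakdown:

```python
# Back-of-the-envelope parameter count for a GPTBigCode-style model with
# multi-query attention, using the values from the architecture table.
vocab, n_ctx, d, layers, heads = 49_152, 8_192, 2_048, 24, 16
head_dim = d // heads  # 128

embeddings = vocab * d + n_ctx * d  # token + learned position embeddings

attn = d * (d + 2 * head_dim) + (d + 2 * head_dim)  # fused QKV (MQA) + bias
attn += d * d + d                                   # attention output projection
mlp = d * 4 * d + 4 * d + 4 * d * d + d             # up- and down-projections + biases
norms = 2 * 2 * d                                   # two LayerNorms (weight + bias)
per_layer = attn + mlp + norms

total = embeddings + layers * per_layer + 2 * d  # + final LayerNorm
print(total)  # 1137207296
```

That the figure matches exactly only when 8,192 position embeddings are assumed is also what pins the context length to 8,192 tokens.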
### Memory Information
| Memory Detail | Value |
|---------------|-------|
| **Device** | cuda:0 |
| **Estimated Memory Usage** | 4.24 GB (FP32) |
| **GPU** | Tesla T4 |
| **GPU Memory** | 14.7 GB |
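The 4.24 GB figure follows directly from the parameter count at 4 bytes per FP32 weight; halving the precision to FP16 roughly halves the footprint. Note this covers weights only, not activations, KV cache, or optimizer state:

```python
# Estimate raw weight memory from the parameter count (weights only).
params = 1_137_207_296

fp32_gb = params * 4 / 1024**3  # 4 bytes per FP32 parameter
fp16_gb = params * 2 / 1024**3  # 2 bytes per FP16 parameter

print(f"FP32: {fp32_gb:.2f} GB")  # FP32: 4.24 GB
print(f"FP16: {fp16_gb:.2f} GB")  # FP16: 2.12 GB
```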
### Model Category
- **Category**: Massive Model (1B+)
A3ON-1B is proudly developed in India, tailored to excel in coding assistance and beyond.