---
license: apache-2.0
language:
- aa
- ae
- am
- en
- es
- ar
- ja
- eo
- fr
- ru
pipeline_tag: text-generation
tags:
- nova
- ai
- nlop
- nexiloop
- llama
- llm
- novaai
- ainlop
- nlopai
- nexai
---
# Nexiloop Nova Model: Fully Open Source

**License:** Apache-2.0  
**Datasets:**
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- OpenAssistant/oasst_top1_2023-08-25  
**Language:** English

---

<div align="center">

# Nexiloop Nova-1.1B  
**Open Source and Ready for Use**  
A compact architecture optimized for resource-constrained applications.

</div>

[GitHub Repository](https://github.com/mohameodo/nova)

---

The **Nexiloop Nova-1.1B** model is a **1.1B-parameter** chat model built on the Llama 2 architecture. It was pretrained on over **3 trillion tokens** and fine-tuned to provide high-quality, efficient responses across a wide variety of conversational contexts.

### **Features:**
- **Optimized for Compact Systems:** With just 1.1B parameters, Nexiloop Nova suits applications where memory and compute are limited (see the loading sketch below).
- **Pretraining:** Pretrained on the **SlimPajama-627B** dataset, then fine-tuned for stronger conversational ability.
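
Because the model targets memory-constrained systems, 4-bit loading can shrink its footprint further. Below is a minimal sketch assuming `bitsandbytes` is installed; the quantization settings are illustrative defaults, not an official recommendation.

```python
# Minimal sketch: load Nova in 4-bit to fit constrained hardware.
# Assumes bitsandbytes is installed; the settings here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
)

tokenizer = AutoTokenizer.from_pretrained("nexiloop/nova")
model = AutoModelForCausalLM.from_pretrained(
    "nexiloop/nova",
    quantization_config=quant_config,
    device_map="auto",
)
```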

### **Training Overview:**
We adopted the same architecture and tokenizer as **Llama 2**, so Nexiloop Nova plugs into many existing open-source projects with no extra glue code. The training run, which started on **2023-09-01**, used **16 A100-40G GPUs**.

The model was first fine-tuned on a variant of the **UltraChat** dataset, which consists of synthetic dialogues generated by **ChatGPT**. It was then further aligned with the **DPOTrainer** from **TRL**, using a ranking dataset containing **64k prompts** and responses from **GPT-4**.
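
To reproduce an alignment step of this kind, the sketch below shows the general shape of a DPO run with TRL's `DPOTrainer`. The dataset path is a placeholder (any dataset with `prompt`, `chosen`, and `rejected` columns works), and exact argument names vary across TRL versions, so treat this as illustrative rather than the recipe used here.

```python
# Illustrative DPO alignment sketch with TRL; not the exact training recipe.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("nexiloop/nova")
tokenizer = AutoTokenizer.from_pretrained("nexiloop/nova")

# Placeholder dataset; it must have "prompt", "chosen", and "rejected" columns.
train_dataset = load_dataset("your-org/your-ranking-dataset", split="train")

training_args = TrainingArguments(
    output_dir="nova-dpo",
    per_device_train_batch_size=2,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL clones the model as the frozen reference when None
    args=training_args,
    beta=0.1,        # strength of the KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```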

---

### **How to Use the Nexiloop Nova Model**

To use Nexiloop Nova, you'll need **transformers>=4.34**. Below is a simple example showing how to integrate the model into your application.

#### Example Code:

```bash
# Install the required libraries
pip install "transformers>=4.34"
pip install accelerate
```

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="nexiloop/nova",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Use the tokenizer's chat template to format each message - see
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
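
If you prefer working with the model and tokenizer directly rather than through `pipeline`, the same flow looks like the sketch below; the generation settings simply mirror the example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nexiloop/nova")
model = AutoModelForCausalLM.from_pretrained(
    "nexiloop/nova", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
# apply_chat_template formats the conversation and returns input ids here
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```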