---
license: apache-2.0
datasets:
- HuggingFaceTB/smollm-corpus
- HuggingFaceFW/fineweb-edu
---

# MedIT One – 140M Checkpoint (Fifth Checkpoint After 9B Tokens)

**Repository:** [MedITSolutionsKurman/medit-one](https://github.com/MedITSolutionsKurman/medit-one)

**Model Type:** Causal Language Model (OneForCausalLM)

**Checkpoint:** 140M parameters, fifth checkpoint after 9B tokens

**Tokenizer:** [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)

---

## Model Overview

The MedIT One model presented here is an early checkpoint of the One series, taken after 9 billion tokens of training.
It is designed for natural language generation tasks and is implemented with a focus on efficient causal language modeling.
This checkpoint contains 140 million parameters and is built with PyTorch, with support for `bfloat16` precision, making it suitable for GPU-accelerated inference.
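
The overview recommends `bfloat16`; whether that dtype is hardware-accelerated depends on your GPU. A quick check using plain PyTorch (not specific to this repository):

```python
import torch

# bfloat16 is only hardware-accelerated on recent GPUs (e.g. NVIDIA Ampere or newer);
# fall back to float32 on machines without native support.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    dtype = torch.bfloat16
else:
    dtype = torch.float32
print(f"Selected dtype: {dtype}")
```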

---

## Intended Use

- **Primary Applications:** Natural language generation, research experiments, and prompt completion tasks.
- **Research:** As an early checkpoint, this release can be used to study model behavior during training, in particular repetitive generation.
- **Prototyping:** Developers and researchers can use this checkpoint to explore early results and follow the evolution of the MedIT One series.

**Caution:** As an early checkpoint, the model tends to exhibit repetitive generation. Users should set the repetition penalty (recommended value: 1.2) during inference to mitigate this behavior.
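
If you prefer the penalty to apply by default rather than passing it on every call, one option is to set it on the model's generation config; a minimal sketch using the standard `transformers` attribute, assuming `model` has been loaded as shown under "How to Use" below:

```python
# Make the recommended repetition penalty the default for model.generate().
# Assumes `model` was loaded as in the "How to Use" example below.
model.generation_config.repetition_penalty = 1.2
```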

---

## Installation

```bash
# From source (without CUDA acceleration)
git clone https://github.com/MedITSolutionsKurman/medit-one
cd medit-one
pip install -e .

# From source with CUDA acceleration
python install_cuda.py

# For training capabilities only
pip install -e ".[training]"

# For full installation with all features including CUDA acceleration
pip install -e ".[full]"
```
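
A quick way to confirm that the editable install worked is to import the model class used later in this card (the module path matches the usage example below; adjust it if the repository layout changes):

```bash
python -c "from one.modeling_one import OneForCausalLM; print('medit-one import OK')"
```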

---

## How to Use

After installing the `medit-one` package from the repository, the model can be loaded and run with the following code snippet:

```python
import torch
from time import time
from transformers import AutoTokenizer, TextStreamer

from one.modeling_one import OneForCausalLM

# Set the model checkpoint path
path = 'meditsolutions/medit-one-140M-9B-tokens-checkpoint'

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(path)
model = OneForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)

device = 'cuda'
model.to(device)
model.eval()

text = 'The role of artificial intelligence'

# Tokenize the input text and move it to the same device as the model
tokens = tokenizer(text, return_tensors='pt').to(device)

start = time()

# Inference with the recommended repetition penalty of 1.2
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    with torch.no_grad():
        output = model.generate(
            **tokens,
            max_new_tokens=1024,
            streamer=TextStreamer(tokenizer),
            do_sample=None,
            temperature=None,
            repetition_penalty=1.2,
            use_cache=True,
            output_attentions=False,
            eos_token_id=model.config.eos_token_id if model.config.eos_token_id is not None else tokenizer.eos_token_id
        )

end = time()
tokens_per_sec = len(output[0]) / (end - start)
print(f'Time taken: {end - start:.2f} seconds, tokens per second: {tokens_per_sec:.2f}')
```
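
The `TextStreamer` prints tokens as they are generated; if you also need the completion as a plain string (for logging or post-processing), decode the returned tensor with the standard tokenizer API, reusing `output` and `tokenizer` from the snippet above:

```python
# Decode the full generated sequence (prompt + completion) into text.
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```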

**Note:** When using this checkpoint, it is essential to apply a repetition penalty of 1.2 to help control the model’s tendency toward repetitive text generation.

---

## Model Details

- **Parameters:** 140M (early checkpoint)
- **Training Tokens:** Checkpoint taken after 9B training tokens
- **Precision:** Supports `bfloat16` for accelerated computation on compatible hardware
- **Architecture:** Causal language model implemented in PyTorch, part of the MedIT One series
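
To verify the reported size, the parameter count can be read directly from the loaded model; a small sketch, assuming `model` was loaded as in the usage example above:

```python
# Total parameter count; expected to be roughly 140M for this checkpoint.
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M")
```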

---

## Limitations & Considerations

- **Repetition:** This early checkpoint is known to produce repetitive outputs. Adjusting the repetition penalty (recommended: 1.2) is necessary to reduce this effect.
- **Early Checkpoint Status:** As a checkpoint from an early stage of training, performance and fluency might be lower compared to later, more refined checkpoints.
- **Usage Recommendations:** Best suited for research and experimental purposes rather than production deployment without further fine-tuning.

---

## Training Data & Methodology

While detailed documentation on the training dataset and methods is available in the repository, this checkpoint represents an intermediate stage of training after 9B tokens. Users interested in the training process, dataset specifics, and additional checkpoints are encouraged to consult the [repository documentation](https://github.com/MedITSolutionsKurman/medit-one).

---

## Citation

If you use the MedIT One model in your research or applications, please cite the repository:

```
@misc{medit-one,
  author = {MedITSolutionsKurman},
  title = {MedIT One},
  year = {202X},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/MedITSolutionsKurman/medit-one}},
}
```

---

## Additional Information

For more details on installation, model training, and updates, please refer to the repository's README and documentation. Contributions and feedback are welcome from the community.