‼️

# This model is a variation of the <span style="color: blue; font-weight: bold;">meta-llama/Llama-3.1-8B-Instruct</span> in which all weights and biases are set to *random values drawn from a Gaussian distribution (mean=0, std=1)*

# It produces essentially random outputs to prompts. While it isn't practically useful, it can serve educational purposes.

‼️
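For reference, a model like this can be produced by re-initializing every parameter of the base model. Below is a minimal sketch of that process, assuming access to the gated meta-llama/Llama-3.1-8B-Instruct weights; it is not necessarily the exact script used to create this checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the original model (gated; requires access to meta-llama/Llama-3.1-8B-Instruct).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
)

# Overwrite every weight and bias in place with samples from N(mean=0, std=1).
with torch.no_grad():
    for param in model.parameters():
        param.normal_(mean=0.0, std=1.0)

# Save the randomized checkpoint so it can be loaded like any other model.
model.save_pretrained("RANDOM_Llama-3.1-8B-Instruct")
```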
### Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function; a sketch of the latter approach appears after the pipeline example below.

Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch

model_id = "atahanuz/RANDOM_Llama-3.1-8B-Instruct"

# Build a text-generation pipeline; bfloat16 keeps memory usage manageable
# and device_map="auto" places the model on available devices automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The last list element is the assistant's (random) reply.
print(outputs[0]["generated_text"][-1])
```
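Alternatively, you can use the Auto classes with `generate()` directly, as mentioned above. A minimal sketch of that route, assuming this repository ships the standard Llama 3.1 tokenizer and chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "atahanuz/RANDOM_Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Format the conversation with the chat template and tokenize it.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```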