To find the result of 8.8 - 8.11, we simply perform the subtraction:
8.8 - 8.11 = 0.69
So, 8.8 - 8.11 equals 0.69.
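As an aside, you can verify this subtraction exactly in Python with the `decimal` module (an illustrative snippet; plain binary floats would show a tiny rounding error):

```python
from decimal import Decimal

# Exact decimal arithmetic; a plain float subtraction (8.8 - 8.11)
# would produce a value very slightly off from 0.69.
result = Decimal("8.8") - Decimal("8.11")
print(result)  # 0.69
```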
# Limitations
This model is not uncensored, but it may still produce erotic outputs. You are solely responsible for the outputs of the model.
Like any other LLM, users and hosts alike should be aware that AI language models may hallucinate and produce inaccurate, dangerous, or even completely nonsensical outputs. Even when the model's answers seem accurate, always double-check responses against credible sources for important tasks.
# Notices
This was the mergekit YAML config we used:
```yaml
base_model: Qwen/Qwen2.5-1.5B-Instruct
merge_method: passthrough
slices:
  - sources:
      - model: Qwen/Qwen2.5-1.5B-Instruct
        layer_range: [0, 21] # Lower layers
  - sources:
      - model: Qwen/Qwen2.5-Coder-1.5B-Instruct
        layer_range: [8, 10] # Better coding performance
  - sources:
      - model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
        layer_range: [5, 24] # Mid layers
  - sources:
      - model: Unsloth/Qwen2.5-1.5B-Instruct
        layer_range: [14, 28] # Higher layers
tokenizer_source: unsloth/Qwen2.5-1.5B-Instruct
dtype: bfloat16
```
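After running this config through mergekit (for example with the `mergekit-yaml` CLI), the merged model loads like any other Qwen2.5 checkpoint. Here is a minimal sketch using `transformers`, assuming the merge output was saved to an illustrative local path `./merged-model`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merged-model" is an illustrative path for the mergekit output;
# a Hugging Face Hub repo id works the same way.
model_path = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "What is 8.8 - 8.11?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```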
# Uploaded model
- **Developed by:** Pinkstack
- **License:** Apache 2.0
- **Finetuned from model:** Pinkstack/Fijik-3b-v1-sft

This Qwen2.5 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
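For context, a minimal sketch of what an Unsloth + TRL SFT run looks like (the dataset file, hyperparameters, and the older-style `SFTTrainer` signature shown here are illustrative assumptions; TRL's API has changed across versions):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base checkpoint with Unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Pinkstack/Fijik-3b-v1-sft",
    max_seq_length=2048,
    load_in_4bit=True,
)

# "train.jsonl" with a "text" field is a placeholder dataset.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```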
# Citations
Magpie:
```
@misc{xu2024magpie,
      title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
      author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
      year={2024},
      eprint={2406.08464},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Lion:
```
@misc{chen2023symbolic,
      title={Symbolic Discovery of Optimization Algorithms},
      author={Xiangning Chen and others},
      year={2023},
      eprint={2302.06675},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```