|
--- |
|
{} |
|
--- |
|
|
|
# Model Card for PygKiCOTlion |
|
|
|
PygKiCOTlion is a LoRA merge of Pygmalion-2-13b-SuperCOT + Kimiko v2.
|
|
|
## Model Details |
|
|
|
*Q: **"Why do you do this?!"*** |
|
*A: **Was bored.*** |
|
|
|
### Model Description |
|
|
|
|
|
- **Developed by:** KaraKaraWitch (Merge), kaiokendev (Original SuperCOT LoRA), nRuaif (Kimiko v2 LoRA), kingbri (Pygmalion 2 13b SuperCOT) |
|
- **Model type:** Decoder only |
|
- **License:** LLaMA2 (PygKiCOTlion), SuperCOT (MIT), Kimiko v2 (CC BY-NC-SA (?)) |
|
- **Finetuned from model:** LLaMA2
|
|
|
### Model Sources
|
|
|
- [Pygmalion 2 13b SuperCOT](https://huggingface.co/royallab/Pygmalion-2-13b-SuperCOT) |
|
- [SuperCOT LoRA](https://huggingface.co/kaiokendev/SuperCOT-LoRA) |
|
- [Kimiko v2 LoRA](https://huggingface.co/nRuaif/Kimiko-v2-13B) |
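
For reference, below is a minimal sketch of how a merge like this can be reproduced with `transformers` and `peft`. The repo IDs come from the links above, but the exact merge settings used for PygKiCOTlion are not documented here, so treat it as illustrative only:

```python
# Illustrative only: re-applies the Kimiko v2 LoRA on top of the
# Pygmalion 2 13b SuperCOT merge and bakes the weights in.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "royallab/Pygmalion-2-13b-SuperCOT"   # base merge (see links above)
LORA = "nRuaif/Kimiko-v2-13B"                # Kimiko v2 LoRA adapter

base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Attach the LoRA adapter, then merge it into the base weights.
merged = PeftModel.from_pretrained(base_model, LORA)
merged = merged.merge_and_unload()

merged.save_pretrained("PygKiCOTlion")
tokenizer.save_pretrained("PygKiCOTlion")
```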
|
|
|
|
|
## Uses |
|
|
|
YMMV.
|
|
|
### Direct Use |
|
|
|
|
|
|
Since this is a merge between Pygmalion 2 13b SuperCOT and Kimiko v2, the following instruction formats should work: |
|
|
|
Metharme: |
|
|
|
``` |
|
<|system|>Your system prompt goes here.<|user|>Are you alive?<|model|> |
|
``` |
|
|
|
Alpaca: |
|
|
|
``` |
|
### Instruction: |
|
Your instruction or question here. |
|
### Response: |
|
``` |
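
A quick generation sketch using the Metharme format above (the model ID is a placeholder; substitute the local path or Hub repo where the merged weights actually live):

```python
# Illustrative usage; the model ID below is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "PygKiCOTlion"  # placeholder: local path or Hub repo of the merge

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

# Metharme-style prompt; the Alpaca format above works the same way.
prompt = "<|system|>Your system prompt goes here.<|user|>Are you alive?<|model|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```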
|
|
|
## Bias, Risks, and Limitations |
|
|
|
YMMV. This is untested territory. |
|
|
|
### Testing Feedback
|
|
|
Notes from KaraKaraWitch: |
|
|
|
- The model feels weirdly loopy compared to MythKiCOTlion at lower temps. |
|
- At higher temps, the model tries to venture out of its comfort zone, at the cost of not sticking to the model card as closely as expected.
|
|
|
## Training Details |
|
|
|
N/A. Refer to the respective LoRAs and models.