---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- antisoc-qa-assoc/uphill-instruct-crest-0.1-e2
- antisoc-qa-assoc/uphill-instruct-clash-e2
- antisoc-qa-assoc/Mixtral-8x7B-Yes-Instruct-LimaRP
library_name: transformers
tags:
- mergekit
- merge
---
# uphill-instruct-crest-e2-clash-e2-lime-faint-try1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as a base.
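For intuition only, the sketch below walks through the dare_ties idea on flat numpy vectors: each fine-tune contributes a delta against the base, DARE keeps each element of that delta with probability `density` and rescales the survivors, and TIES elects a sign per parameter and drops contributions that disagree before summing. This is a simplified illustration rather than mergekit's actual implementation; the normalization step in particular is an assumption.
```python
# Toy illustration of the dare_ties idea on flat parameter vectors.
# NOT mergekit's implementation -- a simplified sketch; the exact
# normalization mergekit applies may differ.
import numpy as np

rng = np.random.default_rng(0)

def dare_ties(base, finetuned, densities, weights, normalize=True):
    deltas = []
    for ft, density, w in zip(finetuned, densities, weights):
        delta = ft - base                             # task vector vs. the base model
        keep = rng.random(delta.shape) < density      # DARE: keep each element with prob = density
        delta = np.where(keep, delta / density, 0.0)  # ...and rescale survivors by 1/density
        deltas.append(w * delta)                      # per-model weight from the YAML
    deltas = np.stack(deltas)

    elected = np.sign(deltas.sum(axis=0))             # TIES: elect one sign per parameter
    agree = np.sign(deltas) == elected                # keep only contributions that agree
    merged = np.where(agree, deltas, 0.0).sum(axis=0)
    if normalize:                                     # assumed normalization: divide by agreeing weight mass
        denom = np.where(agree, np.array(weights)[:, None], 0.0).sum(axis=0)
        merged = np.where(denom > 0, merged / np.maximum(denom, 1e-9), 0.0)
    return base + merged

# e.g. with the density/weight values used in the config below
base = rng.normal(size=16)
tunes = [base + rng.normal(scale=0.1, size=16) for _ in range(3)]
merged = dare_ties(base, tunes, densities=[0.4, 0.2, 0.5], weights=[0.3, 0.1, 0.6])
```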
### Models Merged
The following models were included in the merge:
* ./Mixtral-8x7B-Yes-Instruct-LimaRP
* ./uphill-instruct-crest-e2-nolime
* ./uphill-pure-clash-0.2-e2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Faint technique, crest-e2 clash-e2
#
# review:
# - Instruction-following:
# - Swerve:
# - Word choice:
# - Rhythm, cadence:
# - Notes:
# -
#
# - Design:
# The idea here is to cut crush -- formerly the very cornerstone
# of our merges -- completely out. It's very good for word choice
# but crest is, too. The only problem is I seem to remember that
# crest is overfit. So, we make it faint.
#
# Note: nearly two years later I'm trying to bring Mixtral
# back from the dead. There are multiple reasons:
# 1. Mistral-Small is kind of crap and smells like slop.
# Hell, even the comprehension felt weak but maybe that's
# just how I tried to sample it.
# 2. Llama3 hasn't been interesting and is definitely crammed
# with slop.
# 3. Mixtral is probably the least synthetic-trained sounding
# of all the OG models. Even when I tried the Qwen shit
# it seemed to be just openai. Mixtral is still sloppy.
#
# So, the pieces that are ours are uphill: non-instruct lora
# being applied to the instruct rawdog without an intermediate
# step.
#
# Obviously we're using pure elemental antisoc loras, hush's shit
# but not her merge because the merges aren't "uphill", as in,
# a lora made with "mixtral non-instruct" applied straight to
# the instruct with loraize.
#
# The notion, which came to me in the middle of the night, is
# to have the hush loras be only barely present layer-wise but
# weighted heavily. Likewise with LimaRP, send uphill from
# doctor-shotgun's qlora straight into mixtral-instruct
#
# My hypothesis is that we should get really fucking close to
# pure-ass mixtral-instruct in terms of attention, but that
# we're weighting really hard not to write like it. I have no
# idea if that's how it works--I'm a fucking caveman.
#
# What I'm given to understand, and I'm way out of my depth,
# is that the antisoc layers won't have blotched the instruct
# as badly as they usually do, but when they're triggered they
# are dominant. It's entirely possible I've got no idea what
# I'm saying.
#
# Model descriptions:
# - crush: poetry; we have all checkpoints
# - crest: fic; we only have e2 for this
# - clash: novels (I think); we have all checkpoints for 0.2
models:
# I wonder what happens if we just hurl this out the window
# - model: mistralai/Mixtral-8x7B-Instruct-v0.1
# parameters:
# density: 0.9
# weight: 0.55
#
# crest is fic
- model: ./uphill-instruct-crest-e2-nolime
  # I found lima in this, I need to cook another
parameters:
density: 0.4
weight: 0.3
# This is actually an uphill lima but I didn't name it that way.
- model: ./Mixtral-8x7B-Yes-Instruct-LimaRP
parameters:
# Still just a breath of layers from the thing
density: 0.2
# I am gimping its weight compared to hush tunes because limarp has too
# much ai-slop and amateur-smut cliche slop. Honestly, if there were
# something better than limarp I'd try to train it myself but I don't
# know if there is.
weight: 0.1
# Pure uphill clash at e2. Also more weight.
- model: ./uphill-pure-clash-0.2-e2
parameters:
density: 0.5
weight: 0.6
# della sucked ass so dare_ties it is
merge_method: dare_ties
# I know all of these look like instruct but the lora
# is actually not, so we go to the base base
base_model: mistralai/Mixtral-8x7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
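To rerun a merge like this, the YAML above just gets handed to mergekit. Below is a minimal sketch using the Python entry points shown in mergekit's README (`MergeConfiguration`, `run_merge`, `MergeOptions`); the config filename and output path are placeholders, and the local `./uphill-*` / `./Mixtral-*` checkpoints referenced in the YAML have to exist on disk.
```python
# Sketch of driving mergekit from Python with the config above.
# API names follow mergekit's README; paths are placeholders.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("uphill-faint-try1.yml", "r", encoding="utf-8") as fp:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./uphill-instruct-crest-e2-clash-e2-lime-faint-try1",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # GPU speeds things up but isn't required
        copy_tokenizer=True,
    ),
)
```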
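The "uphill" pieces mentioned in the comments are LoRAs trained against the non-instruct base and applied straight onto the instruct model (the card calls the tool for this `loraize`). A rough equivalent with peft would look like the sketch below; the adapter path is a placeholder, and whether this matches the exact procedure used here is an assumption.
```python
# Rough sketch of an "uphill" application: a LoRA trained against the
# non-instruct base is baked straight into the instruct weights.
# Paths are placeholders; this is the generic peft route, not the
# card's `loraize` tool.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

instruct = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", torch_dtype=torch.bfloat16
)
# Adapter trained against mistralai/Mixtral-8x7B-v0.1 (the non-instruct base)
uphill = PeftModel.from_pretrained(instruct, "./lora-trained-on-base")
merged = uphill.merge_and_unload()            # fold the adapter into the instruct weights
merged.save_pretrained("./uphill-checkpoint")
```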
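Once merged, the result loads like any other Mixtral checkpoint with transformers. The model id below is just this merge's working name and is assumed rather than a confirmed hub repository:
```python
# Loading the merged model with transformers; the model id is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "antisoc-qa-assoc/uphill-instruct-crest-e2-clash-e2-lime-faint-try1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "[INST] Write a short scene set in a lighthouse. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```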