This is a merge of LongAlpaca-70B-lora into Sao10K's Euryale-1.3-L2-70B, replacing the embed and norm layers as described in the LongLoRA repo, and removing the extra row and pad token so that the vocabularies match.
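The layer replacement can be sketched schematically. This is a minimal illustration with plain lists standing in for tensors and hypothetical key names; a real merge would operate on the actual Safetensors state dicts:

```python
def merge_state_dicts(base, longalpaca_layers, vocab_size):
    """Replace the base model's embed/norm tensors with the
    LongAlpaca-trained ones, trimming the extra pad-token row from
    the embedding so the vocabulary sizes match. Every other tensor
    from the base model is left untouched."""
    merged = dict(base)
    for key, weight in longalpaca_layers.items():
        if "embed_tokens" in key:
            weight = weight[:vocab_size]  # drop the extra pad-token row
        merged[key] = weight
    return merged

# Toy example: the base vocabulary has 2 tokens, the LongAlpaca
# embedding has a third (pad) row that must be stripped.
base = {
    "model.embed_tokens.weight": [[0.0], [0.1]],
    "model.layers.0.mlp.weight": [[1.0]],
    "model.norm.weight": [1.0],
}
# Only the embed and norm tensors come from the LongAlpaca side.
longalpaca_layers = {
    "model.embed_tokens.weight": [[0.5], [0.6], [0.7]],
    "model.norm.weight": [2.0],
}
merged = merge_state_dicts(base, longalpaca_layers, vocab_size=2)
```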

There is no additional fine-tuning. The resulting model does not appear to be broken, but you are encouraged to test whether it truly behaves as the original model plus 32K context capability (use linear RoPE scaling with a factor of 8).
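The linear RoPE scaling mentioned above simply divides position indices by the scaling factor before the rotary angles are computed, so a 32K position lands in the angle range a 4K-pretrained Llama-2 model has already seen. A minimal sketch (helper names are illustrative, not from any particular library):

```python
import math

def rope_frequencies(dim, base=10000.0):
    # Standard RoPE inverse frequencies for a head dimension `dim`.
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

def rope_angles(position, inv_freq, scaling_factor=1.0):
    # Linear scaling divides the position by the factor, compressing
    # factor * 4096 positions into the pretrained 0..4096 angle range.
    return [(position / scaling_factor) * f for f in inv_freq]

inv_freq = rope_frequencies(128)  # Llama-2-70B uses 128-dim heads
# With factor 8, position 32768 yields the same angles that
# position 4096 did without scaling (4096 * 8 = 32768).
scaled = rope_angles(32768, inv_freq, scaling_factor=8.0)
unscaled = rope_angles(4096, inv_freq)
```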

You could also try merging this with other models of LongLoRA descent (like Aurelian).

A 6-bit EXL2 quantization is available here.

See this discussion for how to create merges like these.

Model size: 69B params (Safetensors, FP16)
Model tree for grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16
