
An attempt using BlockMerge_Gradient to get a better result.

In addition, LimaRP v3 was used; it is recommended to read its documentation.

Description

This repo contains quantized files of Amethyst-13B.

Models and loras used

  • Xwin-LM/Xwin-LM-13B-V0.1
  • The-Face-Of-Goonery/Huginn-13b-FP16
  • zattio770/120-Days-of-LORA-v2-13B
  • lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
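As a minimal sketch, the template above can be filled in programmatically before sending it to whatever GGUF runtime you use. The helper name below is illustrative and not part of this repo:

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template used by this model."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the plot of Hamlet in two sentences.")
```

The resulting string can then be passed as the prompt to a GGUF-compatible loader such as llama.cpp or llama-cpp-python.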

LimaRP v3 usage and suggested settings


You can use these instruction format settings in SillyTavern. Replace "tiny" with your desired response length.

Special thanks to Sushi.

If you want to support me, you can do so here.

GGUF

Model size: 13B params
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
