DavidAU/llama-2-16b-nastychat-Q6_K-GGUF

This model was converted to GGUF format from chargoddard/llama-2-16b-nastychat using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo DavidAU/llama-2-16b-nastychat-Q6_K-GGUF --model llama-2-16b-nastychat.Q6_K.gguf -p "The meaning to life and the universe is"
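For more control over generation, llama-cli accepts the usual sampling and context flags; a sketch (the flag values here are illustrative, not tuned for this model):

```shell
# -n: tokens to generate, -c: context size, --temp: sampling temperature
llama-cli --hf-repo DavidAU/llama-2-16b-nastychat-Q6_K-GGUF \
  --model llama-2-16b-nastychat.Q6_K.gguf \
  -p "The meaning to life and the universe is" \
  -n 128 -c 2048 --temp 0.7
```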

Server:

llama-server --hf-repo DavidAU/llama-2-16b-nastychat-Q6_K-GGUF --model llama-2-16b-nastychat.Q6_K.gguf -c 2048
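Once the server is up, you can query it over HTTP. A minimal sketch using curl against llama-server's /completion endpoint (the default port 8080 and the prompt text are illustrative):

```shell
# POST a prompt to the local llama-server instance;
# n_predict caps the number of generated tokens
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```

llama-server also exposes an OpenAI-compatible /v1/chat/completions endpoint, so existing OpenAI client code can be pointed at it by swapping the base URL.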

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./main -m llama-2-16b-nastychat.Q6_K.gguf -n 128

Special Thanks:


Special thanks to all the following, and many more...

All the model makers, fine tuners, mergers, and tweakers:

  • Provide the raw "DNA" for almost all my models.
  • Model sources are listed on each repo page, especially the "source" repos, with links to the model creators.

Huggingface [ https://huggingface.co ] :

  • The place to store, merge, and tune models endlessly.
  • THE reason we have an open source community.

LlamaCPP [ https://github.com/ggml-org/llama.cpp ] :

  • The ability to compress and run models on GPU(s), CPU(s) and almost all devices.
  • Imatrix, Quantization, and other tools to tune the quants and the models.
  • Llama-Server: a local server with an OpenAI-compatible API for running GGUF models.
  • The only tool I use to quant models.
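The quantization workflow credited here can be sketched with llama.cpp's own tools (a sketch: the paths and the source checkpoint directory are placeholders; convert_hf_to_gguf.py and llama-quantize ship with llama.cpp):

```shell
# convert a Hugging Face checkpoint to an f16 GGUF, then quantize it to Q6_K
python convert_hf_to_gguf.py ./llama-2-16b-nastychat --outfile model-f16.gguf
./llama-quantize model-f16.gguf model-Q6_K.gguf Q6_K
```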

Quant-Masters: Team Mradermacher, Bartowski, and many others:

  • Quant models day and night for us all to use.
  • They are the lifeblood of open source access.

MergeKit [ https://github.com/arcee-ai/mergekit ] :

  • The universal online/offline tool to merge models together and forge something new.
  • Over 20 methods to merge models almost instantly, pull them apart, and put them together again.
  • The tool I have used to create over 1500 models.

Lmstudio [ https://lmstudio.ai/ ] :

  • The go-to tool to test and run models in GGUF format.
  • The tool I use to test, refine, and evaluate new models.
  • LMStudio forum on Discord: endless info and community for open source.

Text Generation Webui // KoboldCPP // SillyTavern

Model details

  • Downloads last month: 240
  • Format: GGUF
  • Model size: 16.2B params
  • Architecture: llama
  • Quantization: 6-bit (Q6_K)