---
base_model:
- nvidia/Llama-3_1-Nemotron-Ultra-253B-v1
pipeline_tag: text-generation
---
Big thanks to ymcki for updating the llama.cpp code to support the 'dummy' layers.
Use the llama.cpp branch from this PR: https://github.com/ggml-org/llama.cpp/pull/12843
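
Once you have a build that includes the PR, the quant can be loaded with any llama.cpp frontend. Below is a minimal sketch using llama-cpp-python, assuming that package is built against a llama.cpp checkout containing the PR above; the GGUF filename and generation parameters are illustrative, not part of this repo.

```python
# Minimal sketch: load one of these GGUF quants with llama-cpp-python.
# Assumes the package was compiled against a llama.cpp checkout that
# includes PR #12843 (dummy-layer support). Filename is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3_1-Nemotron-Ultra-253B-v1.Q4_K_M.gguf",  # your local quant file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows
)

out = llm("Explain what a 'dummy' layer is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```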
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)

'Make knowledge free for everyone'

Quantized version of: [nvidia/Llama-3_1-Nemotron-Ultra-253B-v1](https://huggingface.co/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1)

<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>