
Quantization made by Richard Erkhov.

Github | Discord | Request more models

diffullama - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| diffullama.Q2_K.gguf | Q2_K | 2.36GB |
| diffullama.Q3_K_S.gguf | Q3_K_S | 2.75GB |
| diffullama.Q3_K.gguf | Q3_K | 3.07GB |
| diffullama.Q3_K_M.gguf | Q3_K_M | 3.07GB |
| diffullama.Q3_K_L.gguf | Q3_K_L | 3.35GB |
| diffullama.IQ4_XS.gguf | IQ4_XS | 3.4GB |
| diffullama.Q4_0.gguf | Q4_0 | 3.56GB |
| diffullama.IQ4_NL.gguf | IQ4_NL | 3.58GB |
| diffullama.Q4_K_S.gguf | Q4_K_S | 3.59GB |
| diffullama.Q4_K.gguf | Q4_K | 3.8GB |
| diffullama.Q4_K_M.gguf | Q4_K_M | 3.8GB |
| diffullama.Q4_1.gguf | Q4_1 | 3.95GB |
| diffullama.Q5_0.gguf | Q5_0 | 4.33GB |
| diffullama.Q5_K_S.gguf | Q5_K_S | 4.33GB |
| diffullama.Q5_K.gguf | Q5_K | 4.45GB |
| diffullama.Q5_K_M.gguf | Q5_K_M | 4.45GB |
| diffullama.Q5_1.gguf | Q5_1 | 4.72GB |
| diffullama.Q6_K.gguf | Q6_K | 5.15GB |
| diffullama.Q8_0.gguf | Q8_0 | 6.67GB |
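A minimal usage sketch for one of these files, assuming `huggingface_hub` and `llama-cpp-python` are installed; the repo id below is a placeholder, and the filename is taken from the table above. Note that DiffuLLaMA is a diffusion language model, so plain autoregressive sampling through llama.cpp will not reproduce the diffusion decoding described in the paper; treat this only as a way to fetch and smoke-test a quantized file.

```python
# Sketch: download one quantized file and open it with llama-cpp-python.
# Assumptions: REPO_ID is a placeholder for this GGUF repository's id, and
# `pip install huggingface_hub llama-cpp-python` has already been run.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

REPO_ID = "RichardErkhov/diffullama-gguf"  # placeholder: use this repository's actual id
FILENAME = "diffullama.Q4_K_M.gguf"        # any file from the table above

model_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
llm = Llama(model_path=model_path, n_ctx=2048)

# Plain autoregressive completion; DiffuLLaMA's diffusion-style decoding is not
# implemented in llama.cpp, so this only verifies the GGUF file loads and runs.
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```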

Original model description:

library_name: transformers
base_model:
- meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- diffusion
model-index:
- name: diffullama
  results: []
license: apache-2.0
datasets:
- bigcode/starcoderdata
- cerebras/SlimPajama-627B

diffullama

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf.

Model description

Details and model loading instructions can be found at https://github.com/HKUNLP/DiffuLLaMA.
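As a non-authoritative sketch, the original (unquantized) checkpoint can typically be loaded with transformers; the hub id below is an assumption, and the diffusion-specific generation utilities live in the GitHub repository linked above.

```python
# Sketch: load the original DiffuLLaMA checkpoint with transformers.
# Assumption: "diffusionfamily/diffullama" is a placeholder hub id; check the
# DiffuLLaMA GitHub repository for the official id and generation code.
from transformers import AutoModel, AutoTokenizer

model_id = "diffusionfamily/diffullama"  # placeholder hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)  # diffusion code ships with the repo
```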

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
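An optional sanity check (not part of the original card) that the installed packages match the versions listed above:

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("transformers", transformers.__version__)  # expected 4.44.2
print("torch", torch.__version__)                # expected 2.1.1+cu121
print("datasets", datasets.__version__)          # expected 2.21.0
print("tokenizers", tokenizers.__version__)      # expected 0.19.1
```
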

Citation

@misc{gong2024scalingdiffusionlanguagemodels,
      title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models}, 
      author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
      year={2024},
      eprint={2410.17891},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.17891}, 
}
Model size: 6.74B params
Architecture: llama (GGUF)
