---
tags:
- gguf
- GGUF
---

# Free Tier Colab

This is only for making the initial FP16 GGUF file and computing an imatrix.dat. Quantizing is too slow on Colab because only two CPU cores are available.

# Details

[Thanks to mlabonne for the initial code](https://huggingface.co/mlabonne)

Default imatrix dataset is from [kalomaze](https://github.com/kalomaze)

RP imatrix dataset is from [Lewdiculous](https://huggingface.co/Lewdiculous)

Test data is from [ParasiticRogue](https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light)

This repo hosts files for a Google Colab notebook, hoping to make it easier to convert models to GGUF with an imatrix. There are two imatrix datasets: one for general use and one for RP.
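
For reference, here is a minimal sketch of the two steps the notebook covers, as they might be run from a Colab cell with llama.cpp already cloned and built. The model directory, output names, calibration file, and binary path below are placeholder assumptions, not the notebook's actual values.

```python
# Sketch of the FP16 conversion + imatrix computation steps, assuming llama.cpp
# is cloned to ./llama.cpp and built, and the HF model is downloaded to ./model.
# All paths and file names here are placeholders.
import subprocess

MODEL_DIR = "model"             # hypothetical local HF model directory
FP16_GGUF = "model-f16.gguf"    # output of the conversion step
CALIB_FILE = "calibration.txt"  # one of the two imatrix datasets

# Step 1: convert the HF checkpoint to an FP16 GGUF file.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py", MODEL_DIR,
        "--outtype", "f16",
        "--outfile", FP16_GGUF,
    ],
    check=True,
)

# Step 2: compute imatrix.dat from the FP16 GGUF and the calibration text.
# The llama-imatrix binary's location depends on how llama.cpp was built.
subprocess.run(
    [
        "llama.cpp/build/bin/llama-imatrix",
        "-m", FP16_GGUF,
        "-f", CALIB_FILE,
        "-o", "imatrix.dat",
    ],
    check=True,
)
```

The resulting FP16 GGUF and imatrix.dat would then be downloaded and quantized elsewhere (for example with llama-quantize and its --imatrix option), since quantizing on Colab's two cores is too slow.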