Upload gguf-imat-llama-3.py

#23
by SolidSnacke - opened

Here is the rewritten file for Llama 3. Note that gguf-imat and gguf-imat-llama-3 differ only in this line (line 111):

subprocess.run(["python", convert_script, model_dir, "--outfile", gguf_model_path, "--outtype", "f16", "--vocab-type", "bpe"])
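For context, a minimal sketch of that conversion step, with the command built as a list first. The paths here are hypothetical placeholders (only `convert_script`, `model_dir`, and `gguf_model_path` appear in the quoted line; their values are assumptions), and the actual run is commented out because it needs llama.cpp and the model weights locally:

```python
import subprocess

# Assumed placeholder values; in the real script these are set earlier.
convert_script = "convert.py"                          # llama.cpp converter (assumption)
model_dir = "models/example-llama-3-model"             # hypothetical input directory
gguf_model_path = "models/example-llama-3-model.gguf"  # hypothetical output file

cmd = [
    "python", convert_script, model_dir,
    "--outfile", gguf_model_path,
    "--outtype", "f16",
    # The Llama 3 specific part: its tokenizer is BPE-based,
    # so the vocab type must be given explicitly.
    "--vocab-type", "bpe",
]

# subprocess.run(cmd) would perform the conversion; left commented here
# since it requires llama.cpp and the model files to be present.
print(cmd)
```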

SolidSnacke changed pull request status to closed