nvidia/Llama-Nemotron-Post-Training-Dataset Viewer • Updated 4 days ago • 3.91M • 6.21k • 413
Post: You can now run Llama 4 on your own local device! Run our Dynamic 1.78-bit and 2.71-bit Llama 4 GGUFs: unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF. You can run them on llama.cpp and other inference engines. See our guide here: https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4
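The linked guide covers llama.cpp in detail; as a quick illustration, here is a minimal sketch (not from the post) of pulling one Scout GGUF quant with huggingface_hub and loading it through the llama-cpp-python bindings. The allow_patterns filter, the .gguf filename, and the context size are placeholders; check the repo's file list and the guide for the actual dynamic-quant names and recommended settings.

```python
# Minimal sketch, assuming huggingface_hub and llama-cpp-python are installed.
# The quant pattern and filename below are placeholders, not the repo's real shard names.
from huggingface_hub import snapshot_download
from llama_cpp import Llama

# Download only the quant variant you want instead of the whole repo.
local_dir = snapshot_download(
    repo_id="unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF",
    allow_patterns=["*Q2_K*"],  # placeholder: match the 1.78-bit or 2.71-bit dynamic quant files
    local_dir="llama4-scout-gguf",
)

# Load the downloaded GGUF with the llama.cpp Python bindings.
llm = Llama(
    model_path=f"{local_dir}/Llama-4-Scout-17B-16E-Instruct-Q2_K.gguf",  # placeholder filename
    n_ctx=8192,
)
print(llm("Explain GGUF quantization in one sentence.", max_tokens=64)["choices"][0]["text"])
```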
unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF Image-Text-to-Text • Updated 10 days ago • 26.9k • 16
Open Portuguese LLM Leaderboard • Running on CPU Upgrade • 190 • Track, rank and evaluate open LLMs in Portuguese
Post: You can now run DeepSeek-V3-0324 on your own local device! Run our Dynamic 2.42-bit and 2.71-bit DeepSeek GGUFs: unsloth/DeepSeek-V3-0324-GGUF. You can run them on llama.cpp and other inference engines. See our guide here: https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-v3-0324-locally
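The same download-then-load pattern applies here. A minimal sketch, assuming huggingface_hub is installed and using a placeholder quant pattern; the DeepSeek-V3 GGUFs are split into multiple shards, so you point llama.cpp (or the bindings shown above) at the first shard.

```python
# Minimal sketch: fetch only one DeepSeek-V3-0324 quant variant; the pattern is a placeholder.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="unsloth/DeepSeek-V3-0324-GGUF",
    allow_patterns=["*Q2_K_XL*"],  # placeholder: match the 2.42-bit or 2.71-bit dynamic quant shards
    local_dir="deepseek-v3-0324-gguf",
)
print("GGUF shards downloaded to", local_dir)
# Load by pointing llama.cpp (or Llama(model_path=...)) at the first shard of the split GGUF.
```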
unsloth/Llama-4-Maverick-17B-128E-Instruct-FP8 Image-Text-to-Text • Updated 13 days ago • 719 • 6