Wllama 🦙
Run GGUF directly in your browser!
Convert and upload Hugging Face models to MLX format
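For reference, a conversion like this can also be scripted with the mlx-lm Python package (Apple Silicon only). The repo id and output path below are examples, and the keyword names follow the mlx_lm.convert CLI flags, so they may differ by version; treat this as a sketch rather than the exact workflow of the tool above.

```python
# Minimal sketch of an MLX conversion with mlx-lm (`pip install mlx-lm`,
# Apple Silicon required). Repo id and output directory are examples.
from mlx_lm import convert

convert(
    hf_path="mistralai/Mistral-7B-Instruct-v0.3",  # source Hugging Face repo (example)
    mlx_path="mlx_model",                          # local output directory
    quantize=True,                                 # quantize the converted weights
    # upload_repo="your-username/Mistral-7B-Instruct-v0.3-mlx",  # optional Hub upload
)
```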
Create and quantize Hugging Face models
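As one illustration of quantizing a Hugging Face model (not necessarily the method this tool uses), a model can be quantized on load with bitsandbytes; the repo id below is an arbitrary example.

```python
# Sketch: 4-bit quantization on load via bitsandbytes
# (requires a CUDA GPU and `pip install bitsandbytes`).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "facebook/opt-350m"  # example model; any causal LM repo works
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
print(f"{model.get_memory_footprint() / 1e6:.0f} MB after quantization")
```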
Convert a Hugging Face model to ONNX format
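A conversion along these lines can be done with Optimum's ONNX Runtime integration; the model id below is just an example.

```python
# Sketch: export a Transformers model to ONNX with Optimum
# (`pip install optimum[onnxruntime]`).
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example model
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

model.save_pretrained("onnx_model")      # writes model.onnx plus config files
tokenizer.save_pretrained("onnx_model")  # keep the tokenizer next to the model
```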
Convert Hugging Face models for local use
Adjust theme and visualize JSON input
Calculate VRAM requirements for running large language models
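The arithmetic behind such a calculator is roughly: weight memory equals parameter count times bytes per parameter, plus overhead for the KV cache and activations. A minimal sketch follows; the 20% overhead factor is an assumption, not a universal constant.

```python
# Rough inference-time VRAM estimate: weights plus a ballpark overhead
# for KV cache and activations. The overhead factor is an assumption.
def estimate_vram_gb(n_params_billion: float, bits_per_param: int = 16,
                     overhead: float = 0.2) -> float:
    weight_bytes = n_params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * (1 + overhead) / 1e9

for bits in (16, 8, 4):
    print(f"7B model @ {bits}-bit: ~{estimate_vram_gb(7, bits):.1f} GB")
```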
Experiment with and compare different tokenizers
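For example, two tokenizers can be compared on the same text with a few lines of transformers code; both repo ids below are just commonly available tokenizers.

```python
# Sketch: tokenize the same sentence with two different tokenizers.
from transformers import AutoTokenizer

text = "Hugging Face tokenizers can split text very differently."
for repo in ("gpt2", "bert-base-uncased"):
    tok = AutoTokenizer.from_pretrained(repo)
    tokens = tok.tokenize(text)
    print(f"{repo}: {len(tokens)} tokens -> {tokens}")
```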
Duplicate Hugging Face repositories
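One way to duplicate a model or dataset repo you have access to is to download a snapshot and re-upload it under your own account with huggingface_hub; the destination repo name below is hypothetical. (For Spaces specifically, HfApi also provides a dedicated duplicate_space helper.)

```python
# Sketch: copy a Hub repo by snapshotting it and uploading to a new repo.
# Assumes you are logged in (`huggingface-cli login`); target id is hypothetical.
from huggingface_hub import HfApi, snapshot_download

api = HfApi()
src = "gpt2"
dst = "your-username/gpt2-copy"  # hypothetical destination repo

local_dir = snapshot_download(repo_id=src)   # download all files locally
api.create_repo(repo_id=dst, exist_ok=True)  # create the target repo
api.upload_folder(folder_path=local_dir, repo_id=dst)
```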
Convert models to Safetensors and open PRs
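A hedged sketch of that workflow using safetensors and huggingface_hub follows; the repo id and filenames are placeholders, and checkpoints with tied or shared weights may need extra handling.

```python
# Sketch: convert a pytorch_model.bin to safetensors and open a PR on the repo.
import torch
from safetensors.torch import save_file
from huggingface_hub import hf_hub_download, upload_file

repo_id = "some-user/some-model"  # placeholder repo containing a pytorch_model.bin
bin_path = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin")

state_dict = torch.load(bin_path, map_location="cpu")
state_dict = {k: v.contiguous() for k, v in state_dict.items()}  # safetensors needs contiguous tensors
save_file(state_dict, "model.safetensors", metadata={"format": "pt"})

upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id=repo_id,
    create_pr=True,  # open a pull request instead of pushing to main
)
```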
Convert PDF to text using OCR
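A common way to do this locally is to rasterize the PDF with pdf2image and run Tesseract through pytesseract; both need their system packages (poppler and tesseract) installed, and the filename below is a placeholder.

```python
# Sketch: PDF -> page images -> OCR text.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("document.pdf", dpi=300)  # one PIL image per page
text = "\n\n".join(pytesseract.image_to_string(page) for page in pages)
print(text[:500])
```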
Track datasets to get alerts on new models
Identify dominant colors in an image
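One standard approach is k-means clustering over the pixels of a downscaled copy of the image; a small sketch with a placeholder filename:

```python
# Sketch: dominant colors via k-means over RGB pixels.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = Image.open("photo.jpg").convert("RGB").resize((128, 128))  # downscale for speed
pixels = np.asarray(img).reshape(-1, 3)

kmeans = KMeans(n_clusters=5, n_init=10).fit(pixels)
for r, g, b in kmeans.cluster_centers_.astype(int).tolist():
    print(f"#{r:02x}{g:02x}{b:02x}")  # hex code of each dominant color
```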
Calculate VRAM requirements for LLMs
Vector search Hugging Face datasets with static-retrieval-mrl-en-v1
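A sketch of that kind of search with sentence-transformers and datasets; the dataset id, column name, and query below are assumptions chosen for illustration, while the model id matches the one referenced above.

```python
# Sketch: embed a dataset column with the static-retrieval-mrl-en-v1 model
# and rank rows by cosine similarity to a query. Dataset and query are examples.
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/static-retrieval-mrl-en-v1")
ds = load_dataset("ag_news", split="train[:1000]")  # small example slice

corpus_emb = model.encode(ds["text"], convert_to_tensor=True)
query_emb = model.encode("central bank raises interest rates", convert_to_tensor=True)

scores = cos_sim(query_emb, corpus_emb)[0]
for idx in scores.argsort(descending=True)[:3]:
    print(f"{float(scores[idx]):.3f}  {ds['text'][int(idx)][:80]}")
```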
Edit GGUF metadata on Hugging Face or locally
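Locally, the metadata can at least be inspected with the gguf Python package; rewriting fields in place is more involved, which is what a tool like the one above automates. A read-only sketch with a placeholder filename:

```python
# Sketch: list the metadata keys of a local GGUF file (`pip install gguf`).
from gguf import GGUFReader

reader = GGUFReader("model.gguf")  # placeholder path to a local GGUF file
print(f"{len(reader.fields)} metadata fields")
for name in reader.fields:
    print(name)
```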