These models can run at relatively fast speeds on an NVIDIA GeForce MX150; load them in FP16 if they aren't already in BF16 or FP16.
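The loading rule above can be sketched in plain Python. The helper name and the string dtype labels are illustrative, not part of any library; with `transformers` the result would be passed as the `torch_dtype` argument of `from_pretrained` (as a torch dtype object rather than a string):

```python
# Checkpoints already stored in half precision can be used as-is on the MX150.
HALF_PRECISION = {"float16", "bfloat16"}

def dtype_to_load(stored_dtype: str) -> str:
    """Return the dtype to load a checkpoint in: keep FP16/BF16
    checkpoints as stored, downcast anything else (e.g. FP32) to FP16."""
    return stored_dtype if stored_dtype in HALF_PRECISION else "float16"

print(dtype_to_load("float32"))   # float16
print(dtype_to_load("bfloat16"))  # bfloat16
```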
Caio Silva De Oliveira (CaioXapelaum)
Spaces (8)
- HF Inference Models (Sleeping, 1): Missing /v1/models endpoint for serverless inference API
- Inference Code (Sleeping, 2)
- Curl Converter (Sleeping, 5)
- SDXL Lightning 4Step (Sleeping, 4)
- GGUF Playground (Running, 8): Display a relocation message for GGUF Playground
- Tokenizers4All (Sleeping)
Models (12)
- CaioXapelaum/Qwen-2.5-0.5B-Instruct-4bit • Text Generation • 0.3B • Updated • 10 • 2
- CaioXapelaum/Qwen2.5-1.5B-Instruct-Q4-mlx • Text Generation • 0.2B • Updated • 16
- CaioXapelaum/tiny_starcoder_py-Q8_0-GGUF • Text Generation • 0.2B • Updated • 7
- CaioXapelaum/Qwen2.5-3B-F32-GGUF • Updated
- CaioXapelaum/sdxl • Text-to-Image • Updated • 42
- CaioXapelaum/Qwen2.5-Coder-0.5B-Instruct-Q4-mlx • Text Generation • 0.1B • Updated • 7
- CaioXapelaum/Qwen2.5-Coder-1.5B-Q4_K_M-GGUF • Text Generation • 2B • Updated • 7
- CaioXapelaum/entity-classifier • Image Classification • 0.1B • Updated • 3
- CaioXapelaum/Llama-3.1-Storm-8B-Q5_K_M-GGUF • Text Generation • 8B • Updated • 7
- CaioXapelaum/Orca-2-7b-Patent-Instruct-Llama-2-Q5_K_M-GGUF • 7B • Updated • 4