UIGEN-X-4B-0729-f32-GGUF

UIGEN-X-4B-0729 is a reasoning-only UI generation model from Tesslate, built on the Qwen3-4B architecture. It is designed to systematically plan, architect, and implement complete user interfaces across modern development stacks, following a structured reasoning process of problem analysis, architecture planning, design system definition, and step-by-step implementation. The model supports a comprehensive ecosystem covering 26 major categories of frameworks and libraries across 7 platforms, including web (React, Vue, Angular, Svelte, and more), mobile (React Native, Flutter, Ionic), desktop (Electron, Tauri, Flutter Desktop), and Python applications, with extensive support for styling systems, UI component libraries, state management, animation libraries, and icon systems. It offers 21 distinct visual style categories, ranging from modern design languages such as Glassmorphism and Material Design to thematic and experimental styles, and integrates dynamic tool calling for asset fetching (e.g., Unsplash images) and content generation, enabling both rapid prototyping and production development of complex UI applications.

Recommended inference settings balance creativity with output quality. The model requires only moderate hardware and a standard software environment, and is suited to a range of use cases, including enterprise solutions, educational purposes, and legacy system modernization.
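
As a minimal sketch of local inference, the GGUF files listed below can be served with llama-cpp-python. The quant file, context size, and sampling values here are illustrative assumptions, not official recommendations:

```python
# Minimal sketch: run one of the GGUF quants locally with llama-cpp-python.
# The chosen quant, context length, and sampling values are assumptions;
# tune them for your hardware and task.
from llama_cpp import Llama

llm = Llama(
    model_path="UIGEN-X-4B-0729.Q5_K_M.gguf",  # any quant from the table below
    n_ctx=8192,        # UI generation with reasoning tends to produce long outputs
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU-only
)

messages = [
    {"role": "system", "content": "You are a UI generation assistant."},
    {"role": "user", "content": "Build a responsive pricing page in React with Tailwind CSS."},
]

out = llm.create_chat_completion(
    messages=messages,   # uses the chat template embedded in the GGUF
    max_tokens=2048,
    temperature=0.7,     # illustrative; balances creativity and quality
    top_p=0.9,
)
print(out["choices"][0]["message"]["content"])
```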

Model Files

| File Name | Size | Quant Type |
|-----------|------|------------|
| UIGEN-X-4B-0729.BF16.gguf | 8.05 GB | BF16 |
| UIGEN-X-4B-0729.F16.gguf | 8.05 GB | F16 |
| UIGEN-X-4B-0729.F32.gguf | 16.1 GB | F32 |
| UIGEN-X-4B-0729.Q2_K.gguf | 1.67 GB | Q2_K |
| UIGEN-X-4B-0729.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| UIGEN-X-4B-0729.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| UIGEN-X-4B-0729.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| UIGEN-X-4B-0729.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| UIGEN-X-4B-0729.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| UIGEN-X-4B-0729.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| UIGEN-X-4B-0729.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| UIGEN-X-4B-0729.Q6_K.gguf | 3.31 GB | Q6_K |
| UIGEN-X-4B-0729.Q8_0.gguf | 4.28 GB | Q8_0 |
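
Individual quants can be fetched from this repository without cloning everything, for example with the huggingface_hub client; the file name below is just one entry from the table and can be swapped for any other:

```python
# Sketch: download a single quant file from this Hugging Face repository.
# The filename is an example; substitute any entry from the table above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="prithivMLmods/UIGEN-X-4B-0729-f32-GGUF",
    filename="UIGEN-X-4B-0729.Q5_K_M.gguf",
)
print(f"Downloaded to: {local_path}")
```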

Quants Usage

(The files above are ordered by name, not by quality; IQ-quants are often preferable over similarly sized non-IQ quants.)
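
The size column is usually the deciding factor when choosing a quant for a given machine. As a purely illustrative sketch (the 25% headroom for KV cache and runtime overhead is an assumption, not a measured figure), one approach is to pick the largest file that fits the available RAM/VRAM with some margin:

```python
# Illustrative sketch: choose the largest quant that fits a memory budget.
# File sizes (GB) come from the table above; the headroom factor is a rough
# assumption covering KV cache and runtime overhead.
QUANTS = {
    "Q2_K": 1.67, "Q3_K_S": 1.89, "Q3_K_M": 2.08, "Q3_K_L": 2.24,
    "Q4_K_S": 2.38, "Q4_K_M": 2.50, "Q5_K_S": 2.82, "Q5_K_M": 2.89,
    "Q6_K": 3.31, "Q8_0": 4.28, "BF16": 8.05, "F16": 8.05, "F32": 16.1,
}

def pick_quant(available_gb: float, headroom: float = 1.25) -> str | None:
    """Return the largest quant whose size times the headroom factor fits."""
    fitting = {q: s for q, s in QUANTS.items() if s * headroom <= available_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(6.0))  # -> "Q8_0"   (6 GB budget)
print(pick_quant(3.0))  # -> "Q4_K_S" (3 GB budget)
```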

ikawrakow has published a handy graph comparing some of the lower-quality quant types (lower is better).

Format: GGUF · Model size: 4.02B params · Architecture: qwen3

Model tree for prithivMLmods/UIGEN-X-4B-0729-f32-GGUF

Base model: Qwen/Qwen3-4B-Base
Finetuned: Qwen/Qwen3-4B
Quantized: this model (one of 5 quantized versions)