azaneko committed
Commit 926fae4 · verified · 1 Parent(s): 7ebe361

Create README.md

Files changed (1): README.md added (+73 lines)
 
---
license: mit
base_model:
- HiDream-ai/HiDream-I1-Full
pipeline_tag: text-to-image
---
# HiDream-I1 4-Bit Quantized Model

This repository is a fork of `HiDream-I1` quantized to 4 bits, allowing the full model to run in less than 16 GB of VRAM.

The original repository can be found [here](https://github.com/HiDream-ai/HiDream-I1).

> `HiDream-I1` is a new open-source image generative foundation model with 17B parameters that achieves state-of-the-art image generation quality within seconds.

![image](https://github.com/user-attachments/assets/d4715fb9-efe1-40c3-bd4e-dfd626492eea)
## Models

We offer both the full version and distilled models. Their parameter sizes are the same, so they require the same amount of GPU memory to run. However, the distilled models are faster because they use fewer inference steps.

| Name            | Min VRAM | Steps | HuggingFace |
|-----------------|----------|-------|------------------------------------------------------------------------------------------------------------------------------|
| HiDream-I1-Full | 16 GB    | 50    | 🤗 [Original](https://huggingface.co/HiDream-ai/HiDream-I1-Full) / [NF4](https://huggingface.co/azaneko/HiDream-I1-Full-nf4) |
| HiDream-I1-Dev  | 16 GB    | 28    | 🤗 [Original](https://huggingface.co/HiDream-ai/HiDream-I1-Dev) / [NF4](https://huggingface.co/azaneko/HiDream-I1-Dev-nf4)   |
| HiDream-I1-Fast | 16 GB    | 16    | 🤗 [Original](https://huggingface.co/HiDream-ai/HiDream-I1-Fast) / [NF4](https://huggingface.co/azaneko/HiDream-I1-Fast-nf4) |
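> [!TIP]
> If you prefer, the NF4 repositories listed above can also be fetched ahead of time with `huggingface-cli download` (available in recent versions of `huggingface_hub`). This is only an optional convenience sketch; the Full variant below is just an example, and any repository ID from the table works the same way.
> ```shell
> # Pre-fetch the 4-bit (NF4) weights of the Full variant into the local HuggingFace cache
> huggingface-cli download azaneko/HiDream-I1-Full-nf4
> ```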
## Hardware Requirements

- GPU Architecture: NVIDIA `>= Ampere` (e.g. A100, H100, A40, RTX 3090, RTX 4090)
- GPU RAM: `>= 16 GB`
- CPU RAM: `>= 16 GB`
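> [!TIP]
> As a quick sanity check (not part of this package; assumes the NVIDIA driver is installed), `nvidia-smi` can report whether your GPU has enough memory:
> ```shell
> # Print the GPU name and total VRAM; memory.total should be at least 16 GB for these models
> nvidia-smi --query-gpu=name,memory.total --format=csv
> ```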
## Quick Start

Simply run:

```shell
pip install hdi1 --no-build-isolation
```
> [!NOTE]
> It's recommended that you start a new Python environment for this package to avoid dependency conflicts.
> To do that, you can use `conda create -n hdi1 python=3.12` followed by `conda activate hdi1`,
> or create a virtual environment with `python3 -m venv venv` and activate it with `source venv/bin/activate` on Linux or `venv\Scripts\activate` on Windows.
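>
> For example, a minimal `venv` setup on Linux, followed by the install command from the Quick Start (the directory name `venv` is just an illustration):
> ```shell
> python3 -m venv venv                    # create an isolated environment
> source venv/bin/activate                # activate it (Linux)
> pip install hdi1 --no-build-isolation   # install the package into it
> ```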
### Command Line Interface

Then you can run the module to generate images:

```shell
python -m hdi1 "A cat holding a sign that says 'hello world'"

# or you can specify the model
python -m hdi1 "A cat holding a sign that says 'hello world'" -m fast
```
> [!NOTE]
> The inference script will try to automatically download the `meta-llama/Llama-3.1-8B-Instruct` model files. You need to [agree to the license of the Llama model](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on your HuggingFace account and log in using `huggingface-cli login` in order to use the automatic downloader.
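>
> A typical first-run flow might look like this (illustrative only; it assumes you have already accepted the Llama license on your HuggingFace account):
> ```shell
> # One-time authentication so the gated Llama weights can be downloaded automatically
> huggingface-cli login
>
> # Then generate as usual
> python -m hdi1 "A cat holding a sign that says 'hello world'"
> ```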
### Web Dashboard

We also provide a web dashboard for interactive image generation. You can start it by running:

```shell
python -m hdi1.web
```

![image](https://github.com/user-attachments/assets/39b72f8e-6114-4971-ab5f-0aa39ad81963)
## License

The code in this repository and the HiDream-I1 models are licensed under the [MIT License](./LICENSE).