pzc163 committed · verified
Commit bd7f3fd · 1 Parent(s): 98bb3d1

Delete README.md

Files changed (1):
  1. README.md +0 -109
README.md DELETED
@@ -1,109 +0,0 @@

# flux-lora-littletinies

This is a LoRA derived from [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

The main validation prompt used during training was:

```
ethnographic photography of teddy bear at a picnic
```

## Validation settings

- CFG: `7.5`
- CFG Rescale: `0.7`
- Steps: `50`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024`

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained.
You may reuse the base model text encoder for inference.

## Training settings

- Training epochs: 23
- Training steps: 1800
- Learning rate: 0.0001
- Effective batch size: 16
- Micro-batch size: 8
- Gradient accumulation steps: 2
- Number of GPUs: 1
- Prediction type: epsilon
- Rescaled betas zero SNR: False
- Optimizer: AdamW, stochastic bf16
- Precision: Pure BF16
- Xformers: Enabled
- LoRA Rank: 64
- LoRA Alpha: 16
- LoRA Dropout: 0.1
- LoRA initialisation style: default
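
The LoRA hyperparameters above translate roughly into a PEFT `LoraConfig` like the sketch below. The `target_modules` listed are an assumption (common attention projections), not the layer set actually used by this run; the final comment also spells out how the effective batch size follows from the micro-batch and gradient accumulation values.

```python
# Minimal sketch, not the trainer's own config: mirrors the LoRA hyperparameters
# reported above. target_modules is assumed (typical attention projections) and
# may differ from the layers SimpleTuner actually adapted.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                    # LoRA Rank
    lora_alpha=16,           # LoRA Alpha
    lora_dropout=0.1,        # LoRA Dropout
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumption
)

# Effective batch size = micro-batch * gradient accumulation steps * GPUs
effective_batch_size = 8 * 2 * 1  # = 16, matching the value listed above
```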

## Datasets

### little-tinies

- Repeats: 18
- Total number of images: 64
- Total number of aspect buckets: 1
- Resolution: 1.0 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
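
As a rough sanity check, the dataset numbers above are consistent with the reported step and epoch counts, assuming `Repeats: 18` means 18 additional passes over the 64 images per epoch; that reading of the setting is an assumption, not something stated in this card.

```python
# Back-of-the-envelope check under the assumption stated above.
images = 64
repeats = 18                 # assumed to mean 18 *additional* passes per epoch
effective_batch_size = 16

samples_per_epoch = images * (1 + repeats)                    # 1216
steps_per_epoch = samples_per_epoch // effective_batch_size   # 76
epochs_at_1800_steps = 1800 // steps_per_epoch                # 23, as reported
```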

## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = '/pzc163/flux-lora-littletinies'
adapter_id = 'flux-lora-littletinies'

# Load the pipeline and attach the LoRA weights.
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)

prompt = "ethnographic photography of teddy bear at a picnic"
negative_prompt = "blurry, cropped, ugly"

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)

image = pipeline(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=50,
    generator=torch.Generator(device=device).manual_seed(1641421826),
    width=1152,
    height=768,
    guidance_scale=7.5,
    guidance_rescale=0.7,
).images[0]
image.save("output.png", format="PNG")
```
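
The identifiers above are the local paths recorded at training time. If the adapter is published on the Hub under `pzc163/flux-lora-littletinies` (an assumption based on this repository's name), the more usual pattern is to load the public FLUX.1-dev base model and attach the LoRA to it; a minimal sketch:

```python
# Minimal sketch, assuming the LoRA is published as 'pzc163/flux-lora-littletinies'
# and that the gated 'black-forest-labs/FLUX.1-dev' base model is accessible.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipeline.load_lora_weights("pzc163/flux-lora-littletinies")
pipeline.to("cuda")  # assumes a CUDA device is available

image = pipeline(
    prompt="ethnographic photography of teddy bear at a picnic",
    num_inference_steps=50,
    guidance_scale=7.5,
    width=1152,
    height=768,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
image.save("output.png")
```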

inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./image0.png
- text: 'ethnographic photography of teddy bear at a picnic'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./image1.png
- text: 'a robot walking on the street, surrounded by a group of girls'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'