Surn committed on
Commit 33e124f · 1 Parent(s): aaedf4f

Revisions V3 - Favicons and minor HTML updates
.gitignore CHANGED
@@ -164,4 +164,5 @@ cython_debug/
 /src/__pycache__
 /utils/__pycache__
 /__pycache__
- /temp_models
+ /temp_models
+ /modules/__pycache__
README.md CHANGED
@@ -4,53 +4,64 @@ emoji: 🌖
 colorFrom: yellow
 colorTo: purple
 sdk: gradio
 sdk_version: 5.16.0
 app_file: app.py
- pinned: true
- license: creativeml-openrail-m
- short_description: '[ 250+ Impressive LoRA For Flux ]'
 thumbnail: >-
-   https://cdn-uploads.huggingface.co/production/uploads/6346595c9e5f0fe83fc60444/9dqqr3iMjoNdXWzDTV42-.png
 ---
 
- # List of Flux Dev LoRA Repositories Used as of Now
 
- | No. | Repository Name | Link |
- | --- | --------------- | ---- |
- | 1 | Canopus-LoRA-Flux-FaceRealism | [Link](https://huggingface.co/prithivMLmods/Canopus-LoRA-Flux-FaceRealism) |
- | 2 | softserve_anime | [Link](https://huggingface.co/alvdansen/softserve_anime) |
- | 3 | Canopus-LoRA-Flux-Anime | [Link](https://huggingface.co/prithivMLmods/Canopus-LoRA-Flux-Anime) |
- | 4 | FLUX.1-dev-LoRA-One-Click-Creative-Template | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-One-Click-Creative-Template) |
- | 5 | Canopus-LoRA-Flux-UltraRealism-2.0 | [Link](https://huggingface.co/prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0) |
- | 6 | Flux-Game-Assets-LoRA-v2 | [Link](https://huggingface.co/gokaygokay/Flux-Game-Assets-LoRA-v2) |
- | 7 | softpasty-flux-dev | [Link](https://huggingface.co/alvdansen/softpasty-flux-dev) |
- | 8 | FLUX.1-dev-LoRA-add-details | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-add-details) |
- | 9 | frosting_lane_flux | [Link](https://huggingface.co/alvdansen/frosting_lane_flux) |
- | 10 | flux-ghibsky-illustration | [Link](https://huggingface.co/aleksa-codes/flux-ghibsky-illustration) |
- | 11 | FLUX.1-dev-LoRA-Dark-Fantasy | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Dark-Fantasy) |
- | 12 | Flux_1_Dev_LoRA_Paper-Cutout-Style | [Link](https://huggingface.co/Norod78/Flux_1_Dev_LoRA_Paper-Cutout-Style) |
- | 13 | mooniverse | [Link](https://huggingface.co/alvdansen/mooniverse) |
- | 14 | pola-photo-flux | [Link](https://huggingface.co/alvdansen/pola-photo-flux) |
- | 15 | flux-tarot-v1 | [Link](https://huggingface.co/multimodalart/flux-tarot-v1) |
- | 16 | Flux-Dev-Real-Anime-LoRA | [Link](https://huggingface.co/prithivMLmods/Flux-Dev-Real-Anime-LoRA) |
- | 17 | Flux_Sticker_Lora | [Link](https://huggingface.co/diabolic6045/Flux_Sticker_Lora) |
- | 18 | flux-RealismLora | [Link](https://huggingface.co/XLabs-AI/flux-RealismLora) |
- | 19 | flux-koda | [Link](https://huggingface.co/alvdansen/flux-koda) |
- | 20 | Cine-Aesthetic | [Link](https://huggingface.co/mgwr/Cine-Aesthetic) |
- | 21 | flux_cute3D | [Link](https://huggingface.co/SebastianBodza/flux_cute3D) |
- | 22 | flux_dreamscape | [Link](https://huggingface.co/bingbangboom/flux_dreamscape) |
- | 23 | Canopus-Cute-Kawaii-Flux-LoRA | [Link](https://huggingface.co/prithivMLmods/Canopus-Cute-Kawaii-Flux-LoRA) |
- | 24 | Flux-Pastel-Anime | [Link](https://huggingface.co/Raelina/Flux-Pastel-Anime) |
- | 25 | FLUX.1-dev-LoRA-Vector-Journey | [Link](https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Vector-Journey) |
- | 26 | flux-miniature-worlds | [Link](https://huggingface.co/bingbangboom/flux-miniature-worlds) |
- | 27 | bingbangboom_flux_surf | [Link](https://huggingface.co/glif-loradex-trainer/bingbangboom_flux_surf) |
- | 28 | Canopus-Snoopy-Charlie-Brown-Flux-LoRA | [Link](https://huggingface.co/prithivMLmods/Canopus-Snoopy-Charlie-Brown-Flux-LoRA) |
- | 29 | sonny-anime-fixed | [Link](https://huggingface.co/alvdansen/sonny-anime-fixed) |
- | 30 | flux-multi-angle | [Link](https://huggingface.co/davisbro/flux-multi-angle) |
 
- & More ...
 
- # Space Inspired From
 
 | No. | Feature/Component | Description |
 | --- | ----------------- | ----------- |
@@ -67,4 +78,7 @@ thumbnail: >-
 | 11 | **Space URL** | [flux-lora-the-explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) |
 
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 colorFrom: yellow
 colorTo: purple
 sdk: gradio
+ python_version: 3.10.13
 sdk_version: 5.16.0
 app_file: app.py
+ pinned: false
+ short_description: Transform Your Images into Mesmerizing Hexagon Grids
+ license: apache-2.0
+ tags:
+ - map maker
+ - tabletop
+ - hexagon
+ - text-to-image
+ - image-generation
+ - flux
+ - depth
+ - 3d
+ hf_oauth: true
+ fullWidth: true
 thumbnail: >-
+   https://cdn-uploads.huggingface.co/production/uploads/6346595c9e5f0fe83fc60444/s0fQvcoiSBlH36AXpVwPi.png
 ---
 
+ # Hex Game Maker
+ ## Description
+ Welcome to Hex Game Maker, the ultimate tool for transforming your images into mesmerizing hexagon grid masterpieces! Whether you're a tabletop game enthusiast, a digital artist, or just someone who loves unique patterns, Hex Game Maker has something for you.
 
+ ### What Can You Do?
+ - **Generate Hexagon Grids:** Create stunning hexagon grid overlays on any image with fully customizable parameters.
+ - **AI-Powered Image Generation:** Use AI to generate images based on your prompts and apply hexagon grids to them.
+ - **Color Exclusion:** Pick and exclude specific colors from your hexagon grid for a cleaner and more refined look.
+ - **Interactive Customization:** Adjust hexagon size, border size, rotation, background color, and more in real time.
+ - **Depth and 3D Model Generation:** Generate depth maps and 3D models from your images for enhanced visualization.
+ - **Image Filter [Look-Up Table (LUT)] Application:** Apply filters (LUTs) to your images for color grading and enhancement.
+ - **Pre-rendered Maps:** Access a library of pre-rendered hexagon maps for quick and easy customization.
+ - **Add Margins:** Add customizable margins around your images for a polished finish.
 
+ ### Why You'll Love It
+ - **Fun and Easy to Use:** With an intuitive interface and real-time previews, creating hexagon grids has never been this fun!
+ - **Endless Creativity:** Unleash your creativity with endless customization options and see your images transform in unique ways.
+ - **Bee-Inspired Theme:** Enjoy a delightful yellow and purple theme inspired by bees and hexagons! 🐝
+ - **Advanced AI Models:** Leverage advanced AI models and LoRA weights for high-quality image generation and customization.
 
+ ### Get Started
+ 1. **Upload or Generate an Image:** Start by uploading your own image or generate one using our AI-powered tool.
+ 2. **Customize Your Grid:** Play around with the settings to create the perfect hexagon grid overlay.
+ 3. **Download and Share:** Once you're happy with your creation, download it and share it with the world!
+
+ ### Advanced Features
+ - **Generative AI Integration:** Utilize models like `black-forest-labs/FLUX.1-dev` and various LoRA weights for generating unique images.
+ - **Pre-rendered Maps:** Access a library of pre-rendered hexagon maps for quick and easy customization.
+ - **Image Filter [Look-Up Table (LUT)] Application:** Apply filters (LUTs) to your images for color grading and enhancement.
+ - **Depth and 3D Model Generation:** Create depth maps and 3D models from your images for enhanced visualization.
+ - **Add Margins:** Customize margins around your images for a polished finish.
+
+ Join the hive and start creating with Hex Game Maker today!
+
+
+ # AI Image Generation Space Inspired From
 
 | No. | Feature/Component | Description |
 | --- | ----------------- | ----------- |
 | 11 | **Space URL** | [flux-lora-the-explorer](https://huggingface.co/spaces/multimodalart/flux-lora-the-explorer) |
 
+ ## Contributions
+ Thanks to [@Surn](https://huggingface.co/spaces/Surn/beeuty) for adding this gradio theme!
+ Special thanks to https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC: my efforts to get ZeroGPUs working had stalled for two weeks with no help available, and that Space's code got me unstuck.
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
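The hexagon-grid overlay the README describes reduces to tiling hexagon centers across the image and drawing a hexagon at each center. The following is a minimal sketch of that geometry only (pure Python, not the Space's actual implementation; the flat-top layout and the `hex_size` corner-radius convention are assumptions):

```python
import math

def hex_grid_centers(img_w, img_h, hex_size):
    """Centers of a flat-top hexagon tiling covering an img_w x img_h image.

    hex_size is the hexagon's outer (corner) radius. Columns are spaced
    1.5 * hex_size apart horizontally; odd columns are staggered down by
    half a row so the hexagons interlock.
    """
    col_step = 1.5 * hex_size           # horizontal distance between columns
    row_step = math.sqrt(3) * hex_size  # vertical distance between rows
    centers = []
    col = 0
    x = 0.0
    while x <= img_w:
        y = row_step / 2 if col % 2 else 0.0  # stagger odd columns
        while y <= img_h:
            centers.append((x, y))
            y += row_step
        x += col_step
        col += 1
    return centers

centers = hex_grid_centers(300, 200, 40)
```

An overlay renderer would then draw one hexagon polygon per center (e.g. with PIL's `ImageDraw.polygon`), optionally skipping hexagons whose sampled color is in the excluded-color list.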
app.py CHANGED
@@ -4,13 +4,15 @@ import copy
 import time
 import random
 import logging
 import numpy as np
 from typing import Any, Dict, List, Optional, Union
 
 import torch
 from PIL import Image
 import gradio as gr
-
 
 from diffusers import (
     DiffusionPipeline,
@@ -57,7 +59,11 @@ from modules.constants import (
     cards_alternating,
     card_colors,
     card_colors_alternating,
-     pre_rendered_maps_paths
 )
 from modules.excluded_colors import (
     add_color,
@@ -75,12 +81,14 @@ from modules.misc import (
 from modules.lora_details import (
     approximate_token_count,
     split_prompt_precisely,
 )
 
 import spaces
 
 input_image_palette = []
- current_prerendered_image = gr.State("./images/images/Beeuty-1.png")
 #---if workspace = local or colab---
 
 # Authenticate with Hugging Face
@@ -131,6 +139,7 @@ def flux_pipe_call_that_returns_an_iterable_of_images(
     self,
     prompt: Union[str, List[str]] = None,
     prompt_2: Optional[Union[str, List[str]]] = None,
     height: Optional[int] = None,
     width: Optional[int] = None,
     num_inference_steps: int = 28,
@@ -217,6 +226,7 @@ def flux_pipe_call_that_returns_an_iterable_of_images(
         continue
 
     timestep = t.expand(latents.shape[0]).to(latents.dtype)
 
     noise_pred = self.transformer(
         hidden_states=latents,
@@ -284,40 +294,52 @@ class calculateDuration:
     else:
         print(f"Elapsed time: {self.elapsed_time:.6f} seconds")
 
- def update_selection(evt: gr.SelectData, width, height):
     selected_lora = loras[evt.index]
     new_placeholder = f"Type a prompt for {selected_lora['title']}"
     lora_repo = selected_lora["repo"]
     updated_text = f"### Selected: [{lora_repo}](https://huggingface.co/{lora_repo}) ✅"
     if "aspect" in selected_lora:
-         if selected_lora["aspect"] == "portrait":
-             width = 768
-             height = 1024
-         elif selected_lora["aspect"] == "landscape":
-             width = 1024
-             height = 768
-         else:
-             width = 1024
-             height = 1024
     return (
         gr.update(placeholder=new_placeholder),
         updated_text,
         evt.index,
        width,
        height,
    )
 
- @spaces.GPU(duration=120)
 def generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scale, progress):
     pipe.to("cuda")
     generator = torch.Generator(device="cuda").manual_seed(seed)
-     if approximate_token_count(prompt_mash) > 76:
         prompt, prompt2 = split_prompt_precisely(prompt_mash)
     with calculateDuration("Generating image"):
         # Generate image
         for img in pipe.flux_pipe_call_that_returns_an_iterable_of_images(
             prompt=prompt,
-             prompt2=prompt2,
             num_inference_steps=steps,
             guidance_scale=cfg_scale,
             width=width,
@@ -329,15 +351,24 @@ def generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scal
         ):
             yield img
 
- def generate_image_to_image(prompt_mash, image_input_path, image_strength, steps, cfg_scale, width, height, lora_scale, seed):
     generator = torch.Generator(device="cuda").manual_seed(seed)
     pipe_i2i.to("cuda")
     image_input = load_image(image_input_path)
-     if approximate_token_count(prompt_mash) > 76:
         prompt, prompt2 = split_prompt_precisely(prompt_mash)
     final_image = pipe_i2i(
         prompt=prompt,
-         prompt2=prompt2,
         image=image_input,
         strength=image_strength,
         num_inference_steps=steps,
@@ -350,10 +381,21 @@ def generate_image_to_image(prompt_mash, image_input_path, image_strength, steps
     ).images[0]
     return final_image
 
- @spaces.GPU(duration=120)
- def run_lora(prompt, image_input, image_strength, cfg_scale, steps, selected_index, randomize_seed, seed, width, height, lora_scale, progress=gr.Progress(track_tqdm=True)):
     if selected_index is None:
         raise gr.Error("You must select a LoRA before proceeding.🧨")
     selected_lora = loras[selected_index]
     lora_path = selected_lora["repo"]
     trigger_word = selected_lora["trigger_word"]
@@ -389,7 +431,15 @@ def run_lora(prompt, image_input, image_strength, cfg_scale, steps, selected_ind
 
     if(image_input is not None):
         print(f"\nGenerating image to image with seed: {seed}\n")
-         final_image = generate_image_to_image(prompt_mash, image_input, image_strength, steps, cfg_scale, width, height, lora_scale, seed)
         yield final_image, seed, gr.update(visible=False)
     else:
         image_generator = generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scale, progress)
@@ -401,7 +451,15 @@ def run_lora(prompt, image_input, image_strength, cfg_scale, steps, selected_ind
             final_image = image
             progress_bar = f'<div class="progress-container"><div class="progress-bar" style="--current: {step_counter}; --total: {steps};"></div></div>'
             yield image, seed, gr.update(value=progress_bar, visible=True)
-
     yield final_image, seed, gr.update(value=progress_bar, visible=False)
 
 def get_huggingface_safetensors(link):
@@ -486,6 +544,7 @@ def add_custom_lora(custom_lora):
 
 def remove_custom_lora():
     return gr.update(visible=False), gr.update(visible=False), gr.update(), "", None, ""
 def on_prerendered_gallery_selection(event_data: gr.SelectData):
     global current_prerendered_image
     selected_index = event_data.index
@@ -494,16 +553,27 @@ def on_prerendered_gallery_selection(event_data: gr.SelectData):
     current_prerendered_image.value = selected_image
     return current_prerendered_image
 
 run_lora.zerogpu = True
-
 title = "Hex Game Maker"
- with gr.Blocks(css_paths="style_20250128.css", title=title, theme='Surn/beeuty', delete_cache=(7200, 7200)) as app:
     with gr.Row():
         gr.Markdown("""
             # Hex Game Maker
             ## Transform Your Images into Mesmerizing Hexagon Grid Masterpieces! ⬢""", elem_classes="intro")
     with gr.Row():
-         with gr.Accordion("Welcome to Hex Game Maker, the ultimate tool for transforming your images into stunning hexagon grid artworks. Whether you're a tabletop game enthusiast, a digital artist, or someone who loves unique patterns, HexaGrid Creator has something for you.", open=False, elem_classes="intro"):
             gr.Markdown ("""
 
             ## Drop an image into the Input Image and get started!
@@ -541,7 +611,7 @@ with gr.Blocks(css_paths="style_20250128.css", title=title, theme='Surn/beeuty',
             - **Depth and 3D Model Generation:** Create depth maps and 3D models from your images for enhanced visualization.
             - **Add Margins:** Customize margins around your images for a polished finish.
 
-             Join the hive and start creating with HexaGrid Creator today!
 
             """, elem_classes="intro")
     selected_index = gr.State(None)
@@ -635,58 +705,83 @@ with gr.Blocks(css_paths="style_20250128.css", title=title, theme='Surn/beeuty',
     with gr.Row():
         with gr.Accordion("Generative AI", open = False):
             with gr.Column(scale=3):
-                 prompt = gr.Textbox(label="Prompt", lines=1, placeholder=":/ choose the LoRA and type the prompt ", value="top-down, (rectangular tabletop_map) alien planet map, Battletech_boardgame scifi world with forests, lakes, oceans, continents and snow at the top and bottom, (middle is dark, no_reflections, no_shadows), from directly above. From 100,000 feet looking straight down")
             with gr.Column(scale=1, elem_id="gen_column"):
-                 generate_button = gr.Button("Generate", variant="primary", elem_id="gen_btn")
         with gr.Row():
             with gr.Column(scale=0):
                 selected_info = gr.Markdown("")
-                 gallery = gr.Gallery(
                     [(item["image"], item["title"]) for item in loras],
                     label="LoRA Styles",
                     allow_preview=False,
                     columns=3,
-                     elem_id="gallery",
                     show_share_button=False
                 )
-                 with gr.Group():
-                     custom_lora = gr.Textbox(label="Enter Custom LoRA", placeholder="prithivMLmods/Canopus-LoRA-Flux-Anime")
-                     gr.Markdown("[Check the list of FLUX LoRA's](https://huggingface.co/models?other=base_model:adapter:black-forest-labs/FLUX.1-dev)", elem_id="lora_list")
-                     custom_lora_info = gr.HTML(visible=False)
-                     custom_lora_button = gr.Button("Remove custom LoRA", visible=False)
             with gr.Column(scale=2):
-                 # conditioning_image = gr.Image(label="Conditioning Image",
-                 #     type="filepath",
-                 #     interactive=True,
-                 #     elem_classes="centered solid imgcontainer",
-                 #     key="imgConditioning",
-                 #     image_mode=None,
-                 #     format="PNG",
-                 #     show_download_button=True
-                 # )
-                 with gr.Row():
-                     with gr.Column(scale=1):
-                         # Gallery from PRE_RENDERED_IMAGES GOES HERE
-                         prerendered_image_gallery = gr.Gallery(label="Image Gallery", show_label=True, value=build_prerendered_images(pre_rendered_maps_paths), elem_id="gallery", elem_classes="solid", type="filepath", columns=[3], rows=[3], preview=False ,object_fit="contain", height="auto", format="png",allow_preview=False)
-                     with gr.Column(scale=1):
-                         #image_guidance_stength = gr.Slider(label="Image Guidance Strength", minimum=0, maximum=1.0, value=0.25, step=0.01, interactive=True)
-                         replace_input_image_button = gr.Button(
-                             "Replace Input Image",
-                             elem_id="prerendered_replace_input_image_button",
-                             elem_classes="solid"
-                         )
-                         generate_input_image_from_gallery = gr.Button(
-                             "Generate AI Image from Gallery",
-                             elem_id="generate_input_image_from_gallery",
-                             elem_classes="solid"
-                         )
                 with gr.Accordion("Advanced Settings", open=False):
                     with gr.Row():
-                         image_strength = gr.Slider(label="Denoise Strength", info="Lower means more image influence", minimum=0.1, maximum=1.0, step=0.01, value=0.75)
                     with gr.Column():
                         with gr.Row():
-                             cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, step=0.5, value=3.5)
-                             steps = gr.Slider(label="Steps", minimum=1, maximum=50, step=1, value=28)
 
                     with gr.Row():
                         negative_prompt_textbox = gr.Textbox(
@@ -698,59 +793,71 @@ with gr.Blocks(css_paths="style_20250128.css", title=title, theme='Surn/beeuty',
                         # Add Dropdown for sizing of Images, height and width based on selection. Options are 16x9, 16x10, 4x5, 1x1
                         # The values of height and width are based on common resolutions for each aspect ratio
                         # Default to 16x9, 1024x576
-                         image_size_ratio = gr.Dropdown(label="Image Size", choices=["16:9", "16:10", "4:5", "4:3", "2:1","3:2","1:1", "9:16", "10:16", "5:4", "3:4","1:2", "2:3"], value="16:9", elem_classes="solid", type="value", scale=0, interactive=True)
-                         width = gr.Slider(label="Width", minimum=256, maximum=1536, step=64, value=576)
-                         height = gr.Slider(label="Height", minimum=256, maximum=2560, step=16, value=1024, interactive=False)
                         image_size_ratio.change(
                             fn=update_dimensions_on_ratio,
-                             inputs=[image_size_ratio, width],
                             outputs=[width, height]
                         )
-                         width.change(
-                             fn=lambda *args: update_dimensions_on_ratio(*args)[1],
-                             inputs=[image_size_ratio, width],
-                             outputs=[height]
                         )
                 with gr.Row():
                     randomize_seed = gr.Checkbox(True, label="Randomize seed")
                     seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0, randomize=True)
-                     lora_scale = gr.Slider(label="LoRA Scale", minimum=0, maximum=3, step=0.01, value=0.95)
                 with gr.Row():
-                     gr.HTML(value=versions_html(), visible=True, elem_id="versions")
 
     # Event Handlers
     prerendered_image_gallery.select(
         fn=on_prerendered_gallery_selection,
         inputs=None,
-         outputs=[gr.State(current_prerendered_image)], # Update the state with the selected image
-         show_api=False
     )
-     # replace input image with selected gallery image
     replace_input_image_button.click(
         lambda: current_prerendered_image.value,
         inputs=None,
         outputs=[input_image], scroll_to_output=True
     )
-     gallery.select(
         update_selection,
-         inputs=[width, height],
-         outputs=[prompt, selected_info, selected_index, width, height]
     )
     custom_lora.input(
         add_custom_lora,
        inputs=[custom_lora],
-         outputs=[custom_lora_info, custom_lora_button, gallery, selected_info, selected_index, prompt]
     )
     custom_lora_button.click(
         remove_custom_lora,
-         outputs=[custom_lora_info, custom_lora_button, gallery, selected_info, selected_index, custom_lora]
     )
     gr.on(
         triggers=[generate_button.click, prompt.submit],
         fn=run_lora,
-         inputs=[prompt, input_image, image_strength, cfg_scale, steps, selected_index, randomize_seed, seed, width, height, lora_scale],
        outputs=[input_image, seed, progress_bar]
     )
 
 app.queue()
 app.launch(allowed_paths=["assets","/","./assets","images","./images", "./images/prerendered"], favicon_path="./assets/favicon.ico", max_file_size="10mb")
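This commit replaces the hard-coded portrait/landscape sizes in `update_selection` with a ratio-driven `update_dimensions_on_ratio(aspect_ratio, height)` call. That helper's body is not shown in the diff; the sketch below is a hypothetical re-implementation illustrating the idea (parsing a `"W:H"` ratio string, keeping the given height, and rounding the derived width to a slider-friendly multiple of 16 — all assumptions, not the repo's actual code):

```python
def update_dimensions_on_ratio(aspect_ratio, base_height):
    """Hypothetical stand-in: derive (width, height) from a 'W:H' ratio string.

    Keeps the supplied height and computes the matching width, rounded to a
    multiple of 16 so it lands on a valid slider step.
    """
    w_ratio, h_ratio = map(int, aspect_ratio.split(":"))
    height = int(base_height)
    width = int(round(height * w_ratio / h_ratio / 16)) * 16
    return width, height
```

With this shape, a `"16:9"` selection at height 576 yields a 1024x576 canvas, matching the UI's stated 16:9 default.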
 
4
  import time
5
  import random
6
  import logging
7
+ from gradio.blocks import postprocess_update_dict
8
  import numpy as np
9
  from typing import Any, Dict, List, Optional, Union
10
+ import logging
11
 
12
  import torch
13
  from PIL import Image
14
  import gradio as gr
15
+ from tempfile import NamedTemporaryFile
16
 
17
  from diffusers import (
18
  DiffusionPipeline,
 
59
  cards_alternating,
60
  card_colors,
61
  card_colors_alternating,
62
+ pre_rendered_maps_paths,
63
+ PROMPTS,
64
+ NEGATIVE_PROMPTS,
65
+ TARGET_SIZE,
66
+ temp_files
67
  )
68
  from modules.excluded_colors import (
69
  add_color,
 
81
  from modules.lora_details import (
82
  approximate_token_count,
83
  split_prompt_precisely,
84
+ upd_prompt_notes_by_index,
85
+ get_trigger_words_by_index
86
  )
87
 
88
  import spaces
89
 
90
  input_image_palette = []
91
+ current_prerendered_image = gr.State("./images/Beeuty-1.png")
92
  #---if workspace = local or colab---
93
 
94
  # Authenticate with Hugging Face
 
139
  self,
140
  prompt: Union[str, List[str]] = None,
141
  prompt_2: Optional[Union[str, List[str]]] = None,
142
+ negative_prompt: Optional[Union[str, List[str]]] = None,
143
  height: Optional[int] = None,
144
  width: Optional[int] = None,
145
  num_inference_steps: int = 28,
 
226
  continue
227
 
228
  timestep = t.expand(latents.shape[0]).to(latents.dtype)
229
+ print(f"Step {i + 1}/{num_inference_steps} - Timestep: {timestep.item()}\n")
230
 
231
  noise_pred = self.transformer(
232
  hidden_states=latents,
 
294
  else:
295
  print(f"Elapsed time: {self.elapsed_time:.6f} seconds")
296
 
297
+ def update_selection(evt: gr.SelectData, width, height, aspect_ratio):
298
  selected_lora = loras[evt.index]
299
  new_placeholder = f"Type a prompt for {selected_lora['title']}"
300
+ new_aspect_ratio = aspect_ratio
301
  lora_repo = selected_lora["repo"]
302
  updated_text = f"### Selected: [{lora_repo}](https://huggingface.co/{lora_repo}) ✅"
303
+ # aspect will now use ratios if implemented, like 16:9, 4:3, 1:1, etc.
304
  if "aspect" in selected_lora:
305
+ try:
306
+ new_aspect_ratio = selected_lora["aspect"]
307
+ width, height = update_dimensions_on_ratio(new_aspect_ratio, height)
308
+ except Exception as e:
309
+ print(f"\nError in update selection aspect ratios:{e}\nSkipping")
310
+ new_aspect_ratio = aspect_ratio
311
+ width = width
312
+ height = height
 
313
  return (
314
  gr.update(placeholder=new_placeholder),
315
  updated_text,
316
  evt.index,
317
  width,
318
  height,
319
+ new_aspect_ratio,
320
+ upd_prompt_notes_by_index(evt.index)
321
  )
322
 
323
+ @spaces.GPU(duration=120,progress=gr.Progress(track_tqdm=True))
324
  def generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scale, progress):
325
  pipe.to("cuda")
326
  generator = torch.Generator(device="cuda").manual_seed(seed)
327
+ flash_attention_enabled = torch.backends.cuda.flash_sdp_enabled()
328
+ if flash_attention_enabled:
329
+ pipe.attn_implementation="flash_attention_2"
330
+ print(f"\nGenerating image with prompt: {prompt_mash}\n")
331
+ approx_tokens= approximate_token_count(prompt_mash)
332
+ if approx_tokens > 76:
333
+ print(f"\nSplitting prompt due to length: {approx_tokens}\n")
334
  prompt, prompt2 = split_prompt_precisely(prompt_mash)
335
+ else:
336
+ prompt = prompt_mash
337
+ prompt2 = None
338
  with calculateDuration("Generating image"):
339
  # Generate image
340
  for img in pipe.flux_pipe_call_that_returns_an_iterable_of_images(
341
  prompt=prompt,
342
+ prompt_2=prompt2,
343
  num_inference_steps=steps,
344
  guidance_scale=cfg_scale,
345
  width=width,
 
351
  ):
352
  yield img
353
 
354
+ def generate_image_to_image(prompt_mash, image_input_path, image_strength, steps, cfg_scale, width, height, lora_scale, seed, progress):
355
  generator = torch.Generator(device="cuda").manual_seed(seed)
356
  pipe_i2i.to("cuda")
357
+ flash_attention_enabled = torch.backends.cuda.flash_sdp_enabled()
358
+ if flash_attention_enabled:
359
+ pipe_i2i.attn_implementation="flash_attention_2"
360
  image_input = load_image(image_input_path)
361
+ print(f"\nGenerating image with prompt: {prompt_mash} and {image_input_path}\n")
362
+ approx_tokens= approximate_token_count(prompt_mash)
363
+ if approx_tokens > 76:
364
+ print(f"\nSplitting prompt due to length: {approx_tokens}\n")
365
  prompt, prompt2 = split_prompt_precisely(prompt_mash)
366
+ else:
367
+ prompt = prompt_mash
368
+ prompt2 = None
369
  final_image = pipe_i2i(
370
  prompt=prompt,
371
+ prompt_2=prompt2,
372
  image=image_input,
373
  strength=image_strength,
374
  num_inference_steps=steps,
 
381
  ).images[0]
382
  return final_image
383
 
384
+ @spaces.GPU(duration=140)
385
+ def run_lora(prompt, image_input, image_strength, cfg_scale, steps, selected_index, randomize_seed, seed, width, height, lora_scale, enlarge, use_conditioned_image=False, progress=gr.Progress(track_tqdm=True)):
386
  if selected_index is None:
387
  raise gr.Error("You must select a LoRA before proceeding.🧨")
388
+ print(f"input Image: {image_input}\n")
389
+ # handle selecting a conditioned image from the gallery
390
+ global current_prerendered_image
391
+ conditioned_image=None
392
+ if use_conditioned_image:
393
+ print(f"Conditioned path: {current_prerendered_image.value}.. converting to RGB\n")
394
+ # ensure the conditioned image is an image and not a string, cannot use RGBA
395
+ if isinstance(current_prerendered_image.value, str):
396
+ conditioned_image = open_image(current_prerendered_image.value).convert("RGB")
397
+ image_input = crop_and_resize_image(conditioned_image, width, height)
398
+ print(f"Conditioned Image: {image_input.size}.. converted to RGB and resized\n")
399
  selected_lora = loras[selected_index]
400
  lora_path = selected_lora["repo"]
401
  trigger_word = selected_lora["trigger_word"]
 
431
 
432
  if(image_input is not None):
433
  print(f"\nGenerating image to image with seed: {seed}\n")
434
+ final_image = generate_image_to_image(prompt_mash, image_input, image_strength, steps, cfg_scale, width, height, lora_scale, seed, progress)
435
+ if enlarge:
436
+ upscaled_image = upscale_image(final_image, max(1.0,min((TARGET_SIZE[0]/width),(TARGET_SIZE[1]/height))))
437
+ # Save the upscaled image to a temporary file
438
+ with NamedTemporaryFile(delete=False, suffix=".png") as tmp_upscaled:
439
+ upscaled_image.save(tmp_upscaled.name, format="PNG")
440
+ temp_files.append(tmp_upscaled.name)
441
+ print(f"Upscaled image saved to {tmp_upscaled.name}")
442
+ final_image = tmp_upscaled.name
443
  yield final_image, seed, gr.update(visible=False)
444
  else:
445
  image_generator = generate_image(prompt_mash, steps, seed, cfg_scale, width, height, lora_scale, progress)
 
451
  final_image = image
452
  progress_bar = f'<div class="progress-container"><div class="progress-bar" style="--current: {step_counter}; --total: {steps};"></div></div>'
453
  yield image, seed, gr.update(value=progress_bar, visible=True)
454
+
455
+ if enlarge:
456
+ upscaled_image = upscale_image(final_image, max(1.0,min((TARGET_SIZE[0]/width),(TARGET_SIZE[1]/height))))
457
+ # Save the upscaled image to a temporary file
458
+ with NamedTemporaryFile(delete=False, suffix=".png") as tmp_upscaled:
459
+ upscaled_image.save(tmp_upscaled.name, format="PNG")
460
+ temp_files.append(tmp_upscaled.name)
461
+ print(f"Upscaled image saved to {tmp_upscaled.name}")
462
+ final_image = tmp_upscaled.name
463
  yield final_image, seed, gr.update(value=progress_bar, visible=False)
464
 
465
  def get_huggingface_safetensors(link):
 
544
 
545
  def remove_custom_lora():
546
  return gr.update(visible=False), gr.update(visible=False), gr.update(), "", None, ""
547
+
548
  def on_prerendered_gallery_selection(event_data: gr.SelectData):
549
  global current_prerendered_image
550
  selected_index = event_data.index
 
553
  current_prerendered_image.value = selected_image
554
  return current_prerendered_image
555
 
556
+ def update_prompt_visibility(map_option):
557
+ is_visible = (map_option == "Prompt")
558
+ return (
559
+ gr.update(visible=is_visible),
560
+ gr.update(visible=is_visible),
561
+ gr.update(visible=is_visible)
562
+ )
563
+
564
+ @spaces.GPU()
565
+ def getVersions():
566
+ return versions_html()
567
  run_lora.zerogpu = True
568
+ gr.set_static_paths(paths=["images/","images/images","images/prerendered","LUT/","fonts/", "assets/"])
569
  title = "Hex Game Maker"
570
+ with gr.Blocks(css_paths="style_20250128.css", title=title, theme='Surn/beeuty', delete_cache=(43200, 43200), head_paths="head.htm") as app:
571
  with gr.Row():
572
  gr.Markdown("""
573
  # Hex Game Maker
574
  ## Transform Your Images into Mesmerizing Hexagon Grid Masterpieces! ⬢""", elem_classes="intro")
575
  with gr.Row():
576
+ with gr.Accordion("Welcome to Hex Game Maker, the ultimate tool for transforming your images into stunning hexagon grid artworks. Whether you're a tabletop game enthusiast, a digital artist, or someone who loves unique patterns, Hex Game Maker has something for you.", open=False, elem_classes="intro"):
577
  gr.Markdown ("""
578
 
579
  ## Drop an image into the Input Image and get started!
 
611
  - **Depth and 3D Model Generation:** Create depth maps and 3D models from your images for enhanced visualization.
612
  - **Add Margins:** Customize margins around your images for a polished finish.
613
 
614
+ Join the hive and start creating with Hex Game Maker today!
615
 
616
  """, elem_classes="intro")
617
  selected_index = gr.State(None)
 
705
  with gr.Row():
706
  with gr.Accordion("Generative AI", open = False):
707
  with gr.Column(scale=3):
708
+ map_options = gr.Dropdown(
709
+ label="Map Options",
710
+ choices=list(PROMPTS.keys()),
711
+ value="Alien Landscape",
712
+ elem_classes="solid",
713
+ scale=0
714
+ )
715
+ prompt = gr.Textbox(
716
+ label="Prompt",
717
+ visible=False,
718
+ elem_classes="solid",
719
+ value="top-down, (rectangular tabletop_map) alien planet map, Battletech_boardgame scifi world with forests, lakes, oceans, continents and snow at the top and bottom, (middle is dark, no_reflections, no_shadows), from directly above. From 100,000 feet looking straight down",
720
+ lines=4
721
+ )
722
+ negative_prompt_textbox = gr.Textbox(
723
+ label="Negative Prompt",
724
+ visible=False,
725
+ elem_classes="solid",
726
+ value="Earth, low quality, bad anatomy, blurry, cropped, worst quality, shadows, people, humans, reflections, shadows, realistic map of the Earth, isometric, text"
727
+ )
728
+ prompt_notes_label = gr.Label(
729
+ "Choose a LoRa style or add an image. YOU MUST CLEAR THE IMAGE TO START OVER ",
730
+ elem_classes="solid centered small",
731
+ show_label=False,
732
+ visible=False
733
+ )
734
+ # Keep the change event to maintain functionality
735
+ map_options.change(
736
+ fn=update_prompt_visibility,
737
+ inputs=[map_options],
738
+ outputs=[prompt, negative_prompt_textbox, prompt_notes_label]
739
+ )
740
+
741
  with gr.Column(scale=1, elem_id="gen_column"):
742
+ generate_button = gr.Button("Generate From Prompt and LoRA Style", variant="primary", elem_id="gen_btn")
743
  with gr.Row():
744
  with gr.Column(scale=0):
745
  selected_info = gr.Markdown("")
746
+ lora_gallery = gr.Gallery(
747
  [(item["image"], item["title"]) for item in loras],
748
  label="LoRA Styles",
749
  allow_preview=False,
750
  columns=3,
751
+ elem_id="lora_gallery",
752
  show_share_button=False
753
  )
754
+ with gr.Accordion("Custom LoRA", open=False):
755
+ with gr.Group():
756
+ custom_lora = gr.Textbox(label="Enter Custom LoRA", placeholder="prithivMLmods/Canopus-LoRA-Flux-Anime")
757
+ gr.Markdown("[Check the list of FLUX LoRA's](https://huggingface.co/models?other=base_model:adapter:black-forest-labs/FLUX.1-dev)", elem_id="lora_list")
758
+ custom_lora_info = gr.HTML(visible=False)
759
+ custom_lora_button = gr.Button("Remove custom LoRA", visible=False)
760
  with gr.Column(scale=2):
761
+ with gr.Accordion("Template Image Styles", open = False):
762
+ with gr.Row():
763
+ with gr.Column(scale=1):
764
+ # Gallery from PRE_RENDERED_IMAGES GOES HERE
765
+ prerendered_image_gallery = gr.Gallery(label="Image Gallery", show_label=True, value=build_prerendered_images(pre_rendered_maps_paths), elem_id="gallery", elem_classes="solid", type="filepath", columns=[3], rows=[3], preview=False, object_fit="contain", height="auto", format="png", allow_preview=False)
766
+ with gr.Column(scale=1):
767
+ #image_guidance_stength = gr.Slider(label="Image Guidance Strength", minimum=0, maximum=1.0, value=0.25, step=0.01, interactive=True)
768
+ replace_input_image_button = gr.Button(
769
+ "Replace Input Image",
770
+ elem_id="prerendered_replace_input_image_button",
771
+ elem_classes="solid"
772
+ )
773
+ generate_input_image_from_gallery = gr.Button(
774
+ "Generate AI Image from Gallery",
775
+ elem_id="generate_input_image_from_gallery",
776
+ elem_classes="solid"
777
+ )
778
  with gr.Accordion("Advanced Settings", open=False):
779
  with gr.Row():
780
+ image_strength = gr.Slider(label="Image Guidance Strength (prompt percentage)", info="Lower means more image influence", minimum=0.1, maximum=1.0, step=0.01, value=0.8)
781
  with gr.Column():
782
  with gr.Row():
783
+ cfg_scale = gr.Slider(label="CFG Scale", minimum=1, maximum=20, step=0.5, value=4.5)
784
+ steps = gr.Slider(label="Steps", minimum=1, maximum=50, step=1, value=30)
785
 
786
  with gr.Row():
787
  negative_prompt_textbox = gr.Textbox(
 
793
  # Add Dropdown for sizing of Images, height and width based on selection. Options are 16x9, 16x10, 4x5, 1x1
794
  # The values of height and width are based on common resolutions for each aspect ratio
795
  # Default to 16x9, 1024x576
796
+ image_size_ratio = gr.Dropdown(label="Image Aspect Ratio", choices=["16:9", "16:10", "4:5", "4:3", "2:1","3:2","1:1", "9:16", "10:16", "5:4", "3:4","1:2", "2:3"], value="16:9", elem_classes="solid", type="value", scale=0, interactive=True)
797
+ width = gr.Slider(label="Width", minimum=256, maximum=2560, step=16, value=1024, interactive=False)
798
+ height = gr.Slider(label="Height", minimum=256, maximum=1536, step=64, value=512)
799
+ enlarge_to_default = gr.Checkbox(label="Auto Enlarge to Default Size", value=False)
800
  image_size_ratio.change(
801
  fn=update_dimensions_on_ratio,
802
+ inputs=[image_size_ratio, height],
803
  outputs=[width, height]
804
  )
805
+ height.change(
806
+ fn=lambda *args: update_dimensions_on_ratio(*args)[0],
807
+ inputs=[image_size_ratio, height],
808
+ outputs=[width]
809
  )
810
  with gr.Row():
811
  randomize_seed = gr.Checkbox(True, label="Randomize seed")
812
  seed = gr.Slider(label="Seed", minimum=0, maximum=MAX_SEED, step=1, value=0, randomize=True)
813
+ lora_scale = gr.Slider(label="LoRA Scale", minimum=0, maximum=3, step=0.01, value=1.01)
814
  with gr.Row():
815
+ gr.HTML(value=getVersions(), visible=True, elem_id="versions")
816
 
817
  # Event Handlers
818
+ # Use conditioned_image as the input_image for generate_input_image_from_gallery
819
+ generate_input_image_from_gallery.click(
820
+ fn=run_lora,
821
+ inputs=[prompt, input_image, image_strength, cfg_scale, steps, selected_index, randomize_seed, seed, width, height, lora_scale, enlarge_to_default, gr.State(True)],
822
+ outputs=[input_image, seed, progress_bar], scroll_to_output=True
823
+ )
824
  prerendered_image_gallery.select(
825
  fn=on_prerendered_gallery_selection,
826
  inputs=None,
827
+ outputs=gr.State(current_prerendered_image), # Update the state with the selected image
828
+ show_api=False, scroll_to_output=True
829
  )
830
+ # replace input image with selected prerendered image gallery selection
831
  replace_input_image_button.click(
832
  lambda: current_prerendered_image.value,
833
  inputs=None,
834
  outputs=[input_image], scroll_to_output=True
835
  )
836
+ lora_gallery.select(
837
  update_selection,
838
+ inputs=[width, height, image_size_ratio],
839
+ outputs=[prompt, selected_info, selected_index, width, height, image_size_ratio, prompt_notes_label]
840
  )
841
  custom_lora.input(
842
  add_custom_lora,
843
  inputs=[custom_lora],
844
+ outputs=[custom_lora_info, custom_lora_button, lora_gallery, selected_info, selected_index, prompt]
845
  )
846
  custom_lora_button.click(
847
  remove_custom_lora,
848
+ outputs=[custom_lora_info, custom_lora_button, lora_gallery, selected_info, selected_index, custom_lora]
849
  )
850
  gr.on(
851
  triggers=[generate_button.click, prompt.submit],
852
  fn=run_lora,
853
+ inputs=[prompt, input_image, image_strength, cfg_scale, steps, selected_index, randomize_seed, seed, width, height, lora_scale, enlarge_to_default, gr.State(False)],
854
  outputs=[input_image, seed, progress_bar]
855
  )
856
 
857
+ logging.basicConfig(
858
+ format="[%(levelname)s] %(asctime)s %(message)s", level=logging.INFO
859
+ )
860
+ logging.info("Environment Variables: %s" % os.environ)
861
+
862
  app.queue()
863
  app.launch(allowed_paths=["assets","/","./assets","images","./images", "./images/prerendered"], favicon_path="./assets/favicon.ico", max_file_size="10mb")
assets/android-chrome-192x192.png ADDED

Git LFS Details

  • SHA256: e62e2498c082e0844e492ac43a5d73ec9cb971c73d553eaa791a0007b6b5568e
  • Pointer size: 130 Bytes
  • Size of remote file: 85.5 kB
assets/android-chrome-512x512.png ADDED

Git LFS Details

  • SHA256: d5517687efac3a17cbcbdb9105a1fc212f59dd2550436e8df1352220b19735b0
  • Pointer size: 131 Bytes
  • Size of remote file: 465 kB
assets/apple-touch-icon.png ADDED

Git LFS Details

  • SHA256: 560726a74aa9d40bc0d0472b89dd2eb990e158382986c13c05df59edde83a2f5
  • Pointer size: 130 Bytes
  • Size of remote file: 76 kB
assets/favicon-128x128.png ADDED

Git LFS Details

  • SHA256: d5517687efac3a17cbcbdb9105a1fc212f59dd2550436e8df1352220b19735b0
  • Pointer size: 131 Bytes
  • Size of remote file: 465 kB
assets/favicon-16x16.png ADDED

Git LFS Details

  • SHA256: d250db248ca2a6d938bef4b442424dd044dcfb0bb2a56b7e2f772574814c026f
  • Pointer size: 128 Bytes
  • Size of remote file: 913 Bytes
assets/favicon-32x32.png ADDED

Git LFS Details

  • SHA256: 349da3871d0f2899d0565381b70dd3b7e57686b3284e9b967f22f556a76758ff
  • Pointer size: 129 Bytes
  • Size of remote file: 2.89 kB
assets/site.webmanifest ADDED
@@ -0,0 +1,19 @@
1
+ {
2
+ "name": "Hex Game Maker",
3
+ "short_name": "HexGameMaker",
4
+ "icons": [
5
+ {
6
+ "src": "gradio_api/file=./assets/android-chrome-192x192.png",
7
+ "sizes": "192x192",
8
+ "type": "image/png"
9
+ },
10
+ {
11
+ "src": "gradio_api/file=./assets/android-chrome-512x512.png",
12
+ "sizes": "512x512",
13
+ "type": "image/png"
14
+ }
15
+ ],
16
+ "theme_color": "#ff00ff",
17
+ "background_color": "#ffffff",
18
+ "display": "standalone"
19
+ }
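The manifest above is plain JSON, so a quick stdlib check (reusing the committed fields verbatim) confirms it parses and carries the keys browsers look for when deciding installability:

```python
import json

# The committed site.webmanifest content, verbatim.
manifest = json.loads("""
{
    "name": "Hex Game Maker",
    "short_name": "HexGameMaker",
    "icons": [
        { "src": "gradio_api/file=./assets/android-chrome-192x192.png",
          "sizes": "192x192", "type": "image/png" },
        { "src": "gradio_api/file=./assets/android-chrome-512x512.png",
          "sizes": "512x512", "type": "image/png" }
    ],
    "theme_color": "#ff00ff",
    "background_color": "#ffffff",
    "display": "standalone"
}
""")

# name, icons, and display are the fields checked for PWA installability.
for key in ("name", "icons", "display"):
    assert key in manifest, f"missing {key}"
print(manifest["short_name"])  # HexGameMaker
```

Note the icon `src` values go through Gradio's `gradio_api/file=` route, which is why `assets/` must be registered via `gr.set_static_paths` (done in app.py above).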
head.htm ADDED
@@ -0,0 +1,4 @@
1
+ <link rel="apple-touch-icon" sizes="180x180" href="gradio_api/file=./assets/apple-touch-icon.png">
2
+ <link rel="icon" type="image/png" sizes="32x32" href="gradio_api/file=./assets/favicon-32x32.png">
3
+ <link rel="icon" type="image/png" sizes="16x16" href="gradio_api/file=./assets/favicon-16x16.png">
4
+ <link rel="manifest" href="gradio_api/file=./assets/site.webmanifest">
modules/constants.py CHANGED
@@ -40,6 +40,7 @@ if not HF_API_TOKEN:
40
 
41
  default_lut_example_img = "./LUT/daisy.jpg"
42
  MAX_SEED = np.iinfo(np.int32).max
 
43
 
44
  PROMPTS = {
45
  "BorderBlack": "Top-down view of a hexagon-based alien map with black borders. Features rivers, mountains, volcanoes, and snow at top and bottom. Colors: light blue, green, tan, brown. No reflections or shadows. Partial hexes on edges are black.",
@@ -514,6 +515,18 @@ LORAS = [
514
  "trigger_word": "A TOK composite photo of a person posing at different angles"
515
  },
516
  #31
517
  {
518
  "image": "https://huggingface.co/Borcherding/FLUX.1-dev-LoRA-FractalLand-v0.1/resolve/main/images/example_e2zoqwftv.png",
519
  "title" : "Fractal Land",
@@ -535,7 +548,7 @@ LORAS = [
535
  "weights": "disney_lora.safetensors",
536
  "trigger_word": "disney style",
537
  "trigger_position" : "append",
538
- "notes": "You should use ',disney style' as trigger words at the end. ",
539
  "parameters" :{
540
  "num_inference_steps": "30",
541
  }
@@ -547,7 +560,7 @@ LORAS = [
547
  "weights": "anime_lora.safetensors",
548
  "trigger_word": "anime",
549
  "trigger_position" : "append",
550
- "notes": "You should use ',anime' as trigger words at the end. ",
551
  "parameters" :{
552
  "num_inference_steps": "30",
553
  }
@@ -559,7 +572,7 @@ LORAS = [
559
  "weights": "scenery_lora.safetensors",
560
  "trigger_word": "scenery style",
561
  "trigger_position" : "append",
562
- "notes": "You should use ',scenery style' as trigger words at the end. ",
563
  "parameters" :{
564
  "num_inference_steps": "30",
565
  }
@@ -627,7 +640,14 @@ LORAS = [
627
  "image": "https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Logo-Design/resolve/main/images/73e7db6a33550d05836ce285549de60075d05373c7b0660d631dac33.jpg",
628
  "title": "Logo Design",
629
  "repo": "Shakker-Labs/FLUX.1-dev-LoRA-Logo-Design",
630
- "trigger_word": "wablogo, logo, Minimalist"
631
  },
632
  #41
633
  #43
 
40
 
41
  default_lut_example_img = "./LUT/daisy.jpg"
42
  MAX_SEED = np.iinfo(np.int32).max
43
+ TARGET_SIZE = (2688,1536)
44
 
45
  PROMPTS = {
46
  "BorderBlack": "Top-down view of a hexagon-based alien map with black borders. Features rivers, mountains, volcanoes, and snow at top and bottom. Colors: light blue, green, tan, brown. No reflections or shadows. Partial hexes on edges are black.",
 
515
  "trigger_word": "A TOK composite photo of a person posing at different angles"
516
  },
517
  #31
518
+ {
519
+ "image": "https://huggingface.co/Cossale/Frames2-Flex.1/resolve/main/samples/1737567472380__000005000_2.jpg",
520
+ "title": "Backdrops v2",
521
+ "weights": "backdrops_v2.safetensors",
522
+ "adapter_name": "backdrops_v2",
523
+ "repo": "Cossale/Frames2-Flex.1",
524
+ "trigger_word": "FRM$",
525
+ "notes": "You should use FRM$ as trigger words.",
526
+ "parameters" :{
527
+ "num_inference_steps": "50"
528
+ }
529
+ },
530
  {
531
  "image": "https://huggingface.co/Borcherding/FLUX.1-dev-LoRA-FractalLand-v0.1/resolve/main/images/example_e2zoqwftv.png",
532
  "title" : "Fractal Land",
 
548
  "weights": "disney_lora.safetensors",
549
  "trigger_word": "disney style",
550
  "trigger_position" : "append",
551
+ "notes": "Use ',disney style' as trigger words at the end of prompt. ",
552
  "parameters" :{
553
  "num_inference_steps": "30",
554
  }
 
560
  "weights": "anime_lora.safetensors",
561
  "trigger_word": "anime",
562
  "trigger_position" : "append",
563
+ "notes": "Use ',anime' as trigger words at the end. ",
564
  "parameters" :{
565
  "num_inference_steps": "30",
566
  }
 
572
  "weights": "scenery_lora.safetensors",
573
  "trigger_word": "scenery style",
574
  "trigger_position" : "append",
575
+ "notes": "Use ',scenery style' as trigger words at the end. ",
576
  "parameters" :{
577
  "num_inference_steps": "30",
578
  }
 
640
  "image": "https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-Logo-Design/resolve/main/images/73e7db6a33550d05836ce285549de60075d05373c7b0660d631dac33.jpg",
641
  "title": "Logo Design",
642
  "repo": "Shakker-Labs/FLUX.1-dev-LoRA-Logo-Design",
643
+ "trigger_word": "wablogo, logo, Minimalist",
644
+ "notes": "You should use wablogo, logo, Minimalist as trigger words..",
645
+ "pipe" :{
646
+ "fuse_lora": {"lora_scale":0.8}
647
+ },
648
+ "parameters" :{
649
+ "num_inference_steps": "38"
650
+ }
651
  },
652
  #41
653
  #43
modules/lora_details.py CHANGED
@@ -1,7 +1,43 @@
1
  # modules/lora_details.py
2
 
3
  import gradio as gr
4
- from modules.constants import LORA_DETAILS
5
 
6
  def upd_prompt_notes(model_textbox_value):
7
  """
 
1
  # modules/lora_details.py
2
 
3
  import gradio as gr
4
+ from modules.constants import LORA_DETAILS, LORAS
5
+ def upd_prompt_notes_by_index(lora_index):
6
+ """
7
+ Updates the prompt_notes_label with the notes from LORAS based on index.
8
+
9
+ Args:
10
+ lora_index (int): The index of the selected LoRA model.
11
+
12
+ Returns:
13
+ gr.update: Updated Gradio label component with the notes.
14
+ """
15
+ try:
16
+ if LORAS[lora_index]:
17
+ notes = LORAS[lora_index].get('notes', None)
18
+ if notes is None:
19
+ trigger_word = LORAS[lora_index].get('trigger_word', "")
20
+ trigger_position = LORAS[lora_index].get('trigger_position', "")
21
+ notes = f"{trigger_position} '{trigger_word}' in prompt"
22
+ except IndexError:
23
+ notes = "Enter Prompt description of your image, \nusing models without LoRa may take a 30 minutes."
24
+ return gr.update(value=notes)
25
+
26
+ def get_trigger_words_by_index(lora_index):
27
+ """
28
+ Retrieves the trigger words from LORAS for the specified index.
29
+
30
+ Args:
31
+ lora_index (int): The index of the selected LoRA model.
32
+
33
+ Returns:
34
+ str: The trigger words associated with the model, or an empty string if not found.
35
+ """
36
+ try:
37
+ trigger_words = LORAS[lora_index].get('trigger_word', "")
38
+ except IndexError:
39
+ trigger_words = ""
40
+ return trigger_words
41
 
42
  def upd_prompt_notes(model_textbox_value):
43
  """
modules/misc.py CHANGED
@@ -56,7 +56,7 @@ def convert_ratio_to_dimensions(ratio, height=512, rotate90=False):
56
  Returns:
57
  tuple: A tuple containing the calculated (width, height) in pixels, both divisible by 16.
58
  """
59
- base_height = 512
60
  # Scale the height based on the provided height parameter
61
  # Ensure the height is at least base_height
62
  scaled_height = max(height, base_height)
@@ -70,13 +70,13 @@ def convert_ratio_to_dimensions(ratio, height=512, rotate90=False):
70
  adjusted_width, adjusted_height = adjusted_height, adjusted_width
71
  return adjusted_width, adjusted_height
72
 
73
- def update_dimensions_on_ratio(image_format, width):
74
- # Convert image_format from a string split by ":" into two numbers
75
- width_ratio, height_ratio = map(int, image_format.split(":"))
76
  aspect_ratio = width_ratio / height_ratio
77
 
78
- # Compute new width and height based on the aspect ratio and base width
79
- new_width, new_height = convert_ratio_to_dimensions(aspect_ratio, width)
80
  return new_width, new_height
81
 
82
  # def install_torch():
 
56
  Returns:
57
  tuple: A tuple containing the calculated (width, height) in pixels, both divisible by 16.
58
  """
59
+ base_height = 256
60
  # Scale the height based on the provided height parameter
61
  # Ensure the height is at least base_height
62
  scaled_height = max(height, base_height)
 
70
  adjusted_width, adjusted_height = adjusted_height, adjusted_width
71
  return adjusted_width, adjusted_height
72
 
73
+ def update_dimensions_on_ratio(aspect_ratio_str, height):
74
+ # Convert aspect_ratio from a string split by ":" into two numbers
75
+ width_ratio, height_ratio = map(int, aspect_ratio_str.split(":"))
76
  aspect_ratio = width_ratio / height_ratio
77
 
78
+ # Compute new width and height based on the aspect ratio and base height
79
+ new_width, new_height = convert_ratio_to_dimensions(aspect_ratio, height)
80
  return new_width, new_height
81
 
82
  # def install_torch():
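The net effect of the two changes above (base height lowered to 256, and the helper now keyed on height instead of width) can be sketched as follows. The round-to-16 step is an assumption taken from the docstring, since the actual rounding code sits outside this hunk:

```python
def convert_ratio_to_dimensions(ratio, height=512):
    # Sketch of the updated helper: clamp to the new 256 base height,
    # then keep both dimensions divisible by 16 per the docstring.
    base_height = 256
    scaled_height = max(height, base_height)
    adjusted_height = (scaled_height // 16) * 16
    adjusted_width = (round(adjusted_height * ratio) // 16) * 16
    return adjusted_width, adjusted_height

def update_dimensions_on_ratio(aspect_ratio_str, height):
    # Parse "W:H" and derive the width from the chosen height.
    width_ratio, height_ratio = map(int, aspect_ratio_str.split(":"))
    return convert_ratio_to_dimensions(width_ratio / height_ratio, height)

print(update_dimensions_on_ratio("16:9", 512))  # (896, 512)
print(update_dimensions_on_ratio("1:1", 100))   # clamped up to the 256 base
```

Driving the calculation from height matches the UI wiring above, where the height slider stays interactive and the width slider is recomputed.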
web-ui.bat ADDED
@@ -0,0 +1,13 @@
1
+ set NVIDIA_VISIBLE_DEVICES=0
2
+ set CUDA_VISIBLE_DEVICES=0
3
+ set CUDA_MODULE_LOADING=LAZY
4
+ set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256
5
+ set XFORMERS_FORCE_DISABLE_TRITON=1
6
+ set TF_ENABLE_ONEDNN_OPTS=0
7
+ set USE_FLASH_ATTENTION=1
8
+ set GIT_LFS_ENABLED=true
9
+ set TEMP=e:\TMP
10
+ set TMPDIR=e:\TMP
11
+ set XDG_CACHE_HOME=E:\cache
12
+ python -m app
13
+ pause