Surn committed on
Commit 3e03a29 · 1 Parent(s): ae76ae4

UI, 3D major updates

README.md CHANGED
@@ -5,7 +5,7 @@ colorFrom: yellow
5
  colorTo: purple
6
  sdk: gradio
7
  python_version: 3.10.13
8
- sdk_version: 5.22.0
9
  app_file: app.py
10
  pinned: true
11
  short_description: Transform Your Images into Mesmerizing Hexagon Grids
@@ -36,15 +36,19 @@ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-
36
  ## Description
37
  Welcome to HexaGrid Creator, the ultimate tool for transforming your images into mesmerizing hexagon grid masterpieces! Whether you're a tabletop game enthusiast, a digital artist, or just someone who loves unique patterns, HexaGrid Creator has something for you.
38
 
 
 
39
  ### What Can You Do?
40
- - **Generate Hexagon Grids:** Create stunning hexagon grid overlays on any image with fully customizable parameters. (also square and triangles)
41
- - **AI-Powered Image Generation:** Use AI to generate images based on your prompts and apply hexagon grids to them.
42
- - **Color Exclusion:** Pick and exclude specific colors from your hexagon grid for a cleaner and more refined look.
43
- - **Interactive Customization:** Adjust hexagon size, border size, rotation, background color, and more in real-time.
44
- - **Depth and 3D Model Generation:** Generate depth maps and 3D models from your images for enhanced visualization.
45
- - **Image Filter [Look-Up Table (LUT)] Application:** Apply filters (LUTs) to your images for color grading and enhancement.
46
- - **Pre-rendered Maps:** Access a library of pre-rendered hexagon maps for quick and easy customization.
47
- - **Add Margins:** Add customizable margins around your images for a polished finish.
 
 
48
 
49
  ### Why You'll Love It
50
  - **Fun and Easy to Use:** With an intuitive interface and real-time previews, creating hexagon grids has never been this fun!
@@ -58,11 +62,12 @@ Welcome to HexaGrid Creator, the ultimate tool for transforming your images into
58
  3. **Download and Share:** Once you're happy with your creation, download it and share it with the world!
59
 
60
  ### Advanced Features
61
- - **Generative AI Integration:** Utilize models like `black-forest-labs/FLUX.1-dev` and various LoRA weights for generating unique images.
62
- - **Pre-rendered Maps:** Access a library of pre-rendered hexagon maps for quick and easy customization.
63
- - **Image Filter [Look-Up Table (LUT)] Application:** Apply filters (LUTs) to your images for color grading and enhancement.
64
- - **TRELLIS Depth and 3D Model Generation:** Create depth maps and 3D models from your images for enhanced visualization.
65
- - **Add Margins:** Customize margins around your images for a polished finish.
 
66
 
67
  Join the hive and start creating with HexaGrid Creator today!
68
 
 
5
  colorTo: purple
6
  sdk: gradio
7
  python_version: 3.10.13
8
+ sdk_version: 5.23.1
9
  app_file: app.py
10
  pinned: true
11
  short_description: Transform Your Images into Mesmerizing Hexagon Grids
 
36
  ## Description
37
  Welcome to HexaGrid Creator, the ultimate tool for transforming your images into mesmerizing hexagon grid masterpieces! Whether you're a tabletop game enthusiast, a digital artist, or just someone who loves unique patterns, HexaGrid Creator has something for you.
38
 
39
+ ### <span style='color: red; font-weight: bolder;'>ZeroGPU sometimes crashes or is not available.<br/>Try again in 10 seconds.</span>
40
+
41
  ### What Can You Do?
42
+ - **Generate Hexagon Grids:** Create stunning hexagon, square, or triangle grid overlays with fully customizable parameters.
43
+ - **AI-Powered Image Generation:** Use advanced AI models and LoRA weights to generate images from your prompts and apply unique grid overlays.
44
+ - **Color Exclusion:** Pick and exclude specific colors from your hexagon grid for improved clarity.
45
+ - **Interactive Customization:** Adjust grid size, border size, rotation, background color, and more, all in real time.
46
+ - **Depth & 3D Model Generation:** Generate depth maps and interactive 3D models (with GLB and Gaussian extraction) for enhanced visualization.
47
+ - **Image Filter [LUT] Application:** Apply advanced color grading filters with live previews using LUT files.
48
+ - **Pre-rendered Maps:** Access a library of ready-to-use hexagon maps for quick customization.
49
+ - **Add Margins:** Add customizable margins around your images for a polished print-ready look.
50
+ - **Sketch Pad Integration:** Directly sketch on images to modify or replace them before further processing.
51
+
52
 
53
  ### Why You'll Love It
54
  - **Fun and Easy to Use:** With an intuitive interface and real-time previews, creating hexagon grids has never been this fun!
 
62
  3. **Download and Share:** Once you're happy with your creation, download it and share it with the world!
63
 
64
  ### Advanced Features
65
+ - **Generative AI Integration:** Leverage models like `black-forest-labs/FLUX.1-dev` along with various LoRA weights for unique image generation.
66
+ - **Pre-rendered Maps & Templates:** Quickly select from our curated collection of hexagon map designs.
67
+ - **Image Filter (LUT) Application:** Fine-tune image color grading with advanced LUT support.
68
+ - **TRELLIS Depth & 3D Model Generation:** Create detailed depth maps and 3D models, complete with GLB and Gaussian file extraction.
69
+ - **Add Margins:** Fine-tune image margins for a professional finish.
70
+ - **Sketch Pad Integration:** Use the built-in sketch pad to edit images on the fly before processing.
71
 
72
  Join the hive and start creating with HexaGrid Creator today!
73
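
The grid generation itself lives in `utils/hex_grid.py`, which this commit does not touch. For readers new to the project, here is a minimal, hypothetical sketch of the idea behind the "Generate Hexagon Grids" feature: draw a flat-top hexagon lattice on a transparent overlay and composite it over the input image. The function names are made up for illustration; the defaults (hexagon size 120, border size 2, border color #7b7b7b at 50% opacity) mirror the UI defaults but this is not the repository's actual implementation.

```python
import math
from PIL import Image, ImageDraw

def hexagon_points(cx, cy, size, rotation_deg=0.0):
    """Corner points of a flat-top hexagon centered at (cx, cy)."""
    return [
        (cx + size * math.cos(math.radians(60 * i + rotation_deg)),
         cy + size * math.sin(math.radians(60 * i + rotation_deg)))
        for i in range(6)
    ]

def draw_hex_grid(image, hex_size=120, border_color=(123, 123, 123, 128), border_size=2, rotation=0.0):
    """Return a copy of `image` with a flat-top hexagon grid drawn on top."""
    base = image.convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    col_step = 1.5 * hex_size            # horizontal distance between hex centers
    row_step = math.sqrt(3) * hex_size   # vertical distance between hex centers
    col = 0
    x = 0.0
    while x < base.width + hex_size:
        y = (row_step / 2) if (col % 2) else 0.0   # offset every other column
        while y < base.height + hex_size:
            pts = hexagon_points(x, y, hex_size, rotation)
            # Draw the outline as a closed polyline so older Pillow versions work too.
            draw.line(pts + [pts[0]], fill=border_color, width=border_size, joint="curve")
            y += row_step
        x += col_step
        col += 1
    return Image.alpha_composite(base, overlay)

# Example usage:
# grid = draw_hex_grid(Image.open("input.png"), hex_size=120, border_size=2)
# grid.save("hex_overlay.png")
```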
 
app.py CHANGED
@@ -1,4 +1,7 @@
 
 
1
  import gradio as gr
 
2
  import spaces
3
  import os
4
  import numpy as np
@@ -52,6 +55,7 @@ from utils.image_utils import (
52
  lerp_imagemath,
53
  shrink_and_paste_on_blank,
54
  show_lut,
 
55
  apply_lut_to_image_path,
56
  multiply_and_blend_images,
57
  alpha_composite_with_control,
@@ -62,7 +66,8 @@ from utils.image_utils import (
62
  build_prerendered_images_by_quality,
63
  get_image_from_dict,
64
  calculate_optimal_fill_dimensions,
65
- save_image_to_temp_png
 
66
  )
67
 
68
  from utils.hex_grid import (
@@ -111,12 +116,8 @@ PIPELINE_CLASSES = {
111
  "FluxFillPipeline": FluxFillPipeline
112
  }
113
 
114
- from utils.version_info import (
115
- versions_html,
116
- #initialize_cuda,
117
- #release_torch_resources,
118
- #get_torch_info
119
- )
120
  #from utils.depth_estimation import (get_depth_map_from_state)
121
 
122
  input_image_palette = []
@@ -125,6 +126,7 @@ current_lut_example_img = gr.State(constants.default_lut_example_img)
125
  user_dir = constants.TMPDIR
126
  lora_models = get_lora_models()
127
  selected_index = gr.State(value=-1)
 
128
 
129
  image_processor: Optional[DPTImageProcessor] = None
130
  depth_model: Optional[DPTForDepthEstimation] = None
@@ -133,6 +135,7 @@ pipe: Optional[Union[FluxPipeline, FluxImg2ImgPipeline, FluxControlPipeline, Flu
133
 
134
  def start_session(req: gr.Request):
135
  print(f"Starting session with hash: {req.session_hash}")
 
136
  user_dir = os.path.join(constants.TMPDIR, str(req.session_hash))
137
  os.makedirs(user_dir, exist_ok=True)
138
 
@@ -140,7 +143,8 @@ def start_session(req: gr.Request):
140
  def end_session(req: gr.Request):
141
  print(f"Ending session with hash: {req.session_hash}")
142
  user_dir = os.path.join(constants.TMPDIR, str(req.session_hash))
143
- shutil.rmtree(user_dir)
 
144
 
145
  # Register the cleanup function
146
  atexit.register(end_session)
@@ -777,7 +781,7 @@ def replace_with_sketch_image(sketch_image, replace_current_lut_example_img: boo
777
  if replace_current_lut_example_img:
778
  current_lut_example_img = sketch
779
  return sketch
780
- ####################################### DEPTH ESTIMATION #######################################
781
 
782
  @spaces.GPU(progress=gr.Progress(track_tqdm=True))
783
  def load_trellis_model():
@@ -790,7 +794,7 @@ def load_trellis_model():
790
  TRELLIS_PIPELINE.cuda()
791
  # Preload with a dummy image to finalize initialization
792
  try:
793
- TRELLIS_PIPELINE.preprocess_image(Image.fromarray(np.zeros((512, 512, 4), dtype=np.uint8))) # Preload rembg
794
  except:
795
  pass
796
  print("TRELLIS_PIPELINE loaded\n")
@@ -961,7 +965,7 @@ def generate_3d_asset_part1(depth_image_source, randomize_seed, seed, input_imag
961
  # Determine the final seed using default MAX_SEED from constants
962
  final_seed = np.random.randint(0, constants.MAX_SEED) if randomize_seed else seed
963
  # Process the image for depth estimation
964
- depth_img = depth_process_image(image_path, resized_width=1536, z_scale=336)
965
  #depth_img = resize_image_with_aspect_ratio(depth_img, 1536, 1536)
966
 
967
  user_dir = os.path.join(constants.TMPDIR, str(req.session_hash))
@@ -970,11 +974,19 @@ def generate_3d_asset_part1(depth_image_source, randomize_seed, seed, input_imag
970
  return depth_img, image_path, output_name, final_seed
971
 
972
  @spaces.GPU(duration=150,progress=gr.Progress(track_tqdm=True))
973
- def generate_3d_asset_part2(depth_img, image_path, output_name, seed, steps, model_resolution, video_resolution, req: gr.Request, progress=gr.Progress(track_tqdm=True)):
974
  # Open image using standardized defaults
975
  image_raw = Image.open(image_path).convert("RGB")
976
  resized_image = resize_image_with_aspect_ratio(image_raw, model_resolution, model_resolution)
977
  depth_img = Image.open(depth_img).convert("RGBA")
978
 
979
  if TRELLIS_PIPELINE is None:
980
  gr.Warning(f"Trellis Pipeline is not initialized: {TRELLIS_PIPELINE.device()}")
@@ -983,21 +995,38 @@ def generate_3d_asset_part2(depth_img, image_path, output_name, seed, steps, mod
983
  # Preprocess and run the Trellis pipeline with fixed sampler settings
984
  try:
985
  TRELLIS_PIPELINE.cuda()
986
- processed_image = TRELLIS_PIPELINE.preprocess_image(resized_image, max_resolution=model_resolution, remove_bg = False)
987
- outputs = TRELLIS_PIPELINE.run(
988
- processed_image,
989
- seed=seed,
990
- formats=["gaussian", "mesh"],
991
- preprocess_image=False,
992
- sparse_structure_sampler_params={
993
- "steps": steps,
994
- "cfg_strength": 7.5,
995
- },
996
- slat_sampler_params={
997
- "steps": steps,
998
- "cfg_strength": 3.0,
999
- },
1000
- )
1001
 
1002
  # Validate the mesh
1003
  mesh = outputs['mesh'][0]
@@ -1035,6 +1064,7 @@ def generate_3d_asset_part2(depth_img, image_path, output_name, seed, steps, mod
1035
 
1036
  video = render_utils.render_video(outputs['gaussian'][0], resolution=video_resolution, num_frames=64, r=1, fov=45)['color']
1037
  try:
 
1038
  video_geo = render_utils.render_video(outputs['mesh'][0], resolution=video_resolution, num_frames=64, r=1, fov=45)['normal']
1039
  video = [np.concatenate([video[i], video_geo[i]], axis=1) for i in range(len(video))]
1040
  except Exception as e:
@@ -1043,7 +1073,7 @@ def generate_3d_asset_part2(depth_img, image_path, output_name, seed, steps, mod
1043
  video_path = os.path.join(user_dir, f'{output_name}.mp4')
1044
  imageio.mimsave(video_path, video, fps=8)
1045
 
1046
- #snapshot_results = render_utils.render_snapshot_depth(outputs['mesh'][0], resolution=1280, r=1, fov=80)
1047
  #depth_snapshot = Image.fromarray(snapshot_results['normal'][0]).convert("L")
1048
  depth_snapshot = depth_img
1049
 
@@ -1106,7 +1136,8 @@ def extract_gaussian(state: dict, req: gr.Request, progress=gr.Progress(track_tq
1106
 
1107
  @spaces.GPU()
1108
  def getVersions():
1109
- return versions_html()
 
1110
 
1111
  #generate_input_image_click.zerogpu = True
1112
  #generate_depth_button_click.zerogpu = True
@@ -1122,15 +1153,16 @@ with gr.Blocks(css_paths="style_20250314.css", title=title, theme='Surn/beeuty',
1122
  with gr.Row():
1123
  gr.Markdown("""
1124
  # HexaGrid Creator
1125
- ## Transform Your Images into Mesmerizing Hexagon Grid Masterpieces! ⬢
1126
- ### <span style='color: red; font-style: bolder;'>BEST VIEWED ON DESKTOP</span>""", elem_classes="intro", sanitize_html=False)
 
1127
  with gr.Row():
1128
  with gr.Accordion(" Welcome to HexaGrid Creator, the ultimate tool for transforming your images into stunning hexagon grid artworks. Whether you're a tabletop game enthusiast, a digital artist, or someone who loves unique patterns, HexaGrid Creator has something for you.", open=False, elem_classes="intro"):
1129
  gr.Markdown ("""
1130
 
1131
  ## Drop an image into the Input Image and get started!
1132
 
1133
-
1134
 
1135
  ## What is HexaGrid Creator?
1136
  HexaGrid Creator is a web-based application that allows you to apply a hexagon grid overlay to any image. You can customize the size, color, and opacity of the hexagons, as well as the background and border colors. The result is a visually striking image that looks like it was made from hexagonal tiles!
@@ -1165,7 +1197,7 @@ with gr.Blocks(css_paths="style_20250314.css", title=title, theme='Surn/beeuty',
1165
 
1166
  Join the hive and start creating with HexaGrid Creator today!
1167
 
1168
- """, elem_classes="intro")
1169
  with gr.Row():
1170
  with gr.Column(scale=2):
1171
  input_image = gr.Image(
@@ -1273,234 +1305,239 @@ with gr.Blocks(css_paths="style_20250314.css", title=title, theme='Surn/beeuty',
1273
  with gr.Row():
1274
  blur_button = gr.Button("Blur Input Image", elem_classes="solid")
1275
  blur_sketch_button = gr.Button("Blur Sketch", elem_classes="solid")
1276
- with gr.Row(elem_id="image_gen"):
1277
- with gr.Accordion("Generate AI Image (optional, fun)", open = False):
1278
- with gr.Row():
1279
- with gr.Column(scale=1):
1280
- generate_input_image = gr.Button(
1281
- "Generate from Input Image & Options ",
1282
- elem_id="generate_input_image",
1283
- elem_classes="solid"
1284
- )
1285
- # model_options = gr.Dropdown(
1286
- # label="Choose an AI Model*",
1287
- # choices=constants.MODELS + constants.LORA_WEIGHTS + ["Manual Entry"],
1288
- # value="Cossale/Frames2-Flex.1",
1289
- # elem_classes="solid"
1290
- # )
1291
- model_textbox = gr.Textbox(
1292
- label="LORA/Model",
1293
- value="Cossale/Frames2-Flex.1",
1294
- elem_classes="solid",
1295
- elem_id="inference_model",
1296
- lines=2,
1297
- visible=False
1298
- )
1299
- with gr.Accordion("Choose Image Style*", open=True):
1300
- lora_gallery = gr.Gallery(
1301
- [(open_image(image_path), title) for image_path, title in lora_models],
1302
- label="Styles",
1303
- allow_preview=False, preview=False ,
1304
- columns=2,
1305
- elem_id="lora_gallery",
1306
- show_share_button=False,
1307
- elem_classes="solid", type="filepath",
1308
- object_fit="contain", height="auto", format="png",
1309
- )
1310
- # Update map_options to a Dropdown with choices from constants.PROMPTS keys
1311
  with gr.Row():
1312
- with gr.Column():
1313
- map_options = gr.Dropdown(
1314
- label="Map Options*",
1315
- choices=list(constants.PROMPTS.keys()),
1316
- value="Alien Landscape",
1317
- elem_classes="solid",
1318
- scale=0
1319
- )
1320
- # Add Dropdown for sizing of Images, height and width based on selection. Options are 16x9, 16x10, 4x5, 1x1
1321
- # The values of height and width are based on common resolutions for each aspect ratio
1322
- # Default to 16x9, 912x512
1323
- image_size_ratio = gr.Dropdown(label="Image Aspect Ratio", choices=["16:9", "16:10", "4:5", "4:3", "2:1","3:2","1:1", "9:16", "10:16", "5:4", "3:4","1:2", "2:3"], value="16:9", elem_classes="solid", type="value", scale=0, interactive=True)
1324
- with gr.Column():
1325
- seed_slider = gr.Slider(
1326
- label="Seed",
1327
- minimum=0,
1328
- maximum=constants.MAX_SEED,
1329
- step=1,
1330
- value=0,
1331
- scale=0, randomize=True, elem_id="rnd_seed"
1332
- )
1333
- randomize_seed = gr.Checkbox(label="Randomize seed", value=False, scale=0, interactive=True)
1334
- prompt_textbox = gr.Textbox(
1335
- label="Prompt",
1336
- visible=False,
1337
- elem_classes="solid",
1338
- value="Planetary overhead view, directly from above, centered on the planet’s surface, orthographic (rectangular tabletop_map) alien planet map, Battletech_boardgame scifi world with forests, lakes, oceans, continents and snow at the top and bottom, (middle is dark, no_reflections, no_shadows), looking straight down.",
1339
- lines=4
1340
- )
1341
- negative_prompt_textbox = gr.Textbox(
1342
- label="Negative Prompt",
1343
- visible=False,
1344
- elem_classes="solid",
1345
- value="Earth, low quality, bad anatomy, blurry, cropped, worst quality, shadows, people, humans, reflections, shadows, realistic map of the Earth, isometric, text, camera_angle"
1346
- )
1347
- prompt_notes_label = gr.Label(
1348
- "You should use FRM$ as trigger words. @1.5 minutes",
1349
- elem_classes="solid centered small",
1350
- show_label=False,
1351
- visible=False
1352
- )
1353
- # Keep the change event to maintain functionality
1354
- map_options.change(
1355
- fn=update_prompt_visibility,
1356
- inputs=[map_options],
1357
- outputs=[prompt_textbox, negative_prompt_textbox, prompt_notes_label]
1358
  )
1359
- with gr.Column(scale=2):
1360
  with gr.Row():
1361
- with gr.Column():
1362
- generate_input_image_from_gallery = gr.Button(
1363
- "Generate AI Image from Template Options",
1364
- elem_id="generate_input_image_from_gallery",
 
 
 
 
 
 
 
 
 
 
 
 
1365
  elem_classes="solid"
1366
  )
1367
- with gr.Column():
1368
- replace_input_image_button = gr.Button(
1369
- "Replace Input Image with Template",
1370
- elem_id="prerendered_replace_input_image_button",
1371
- elem_classes="solid"
1372
- )
1373
- with gr.Row():
1374
- with gr.Accordion("Template Images", open = False):
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1375
  with gr.Row():
1376
- with gr.Column(scale=2):
1377
- # Gallery from PRE_RENDERED_IMAGES GOES HERE
1378
- prerendered_image_gallery = gr.Gallery(label="Image Gallery", show_label=True, value=build_prerendered_images_by_quality(3,'thumbnail'), elem_id="gallery",
1379
- elem_classes="solid", type="filepath", columns=[3], rows=[3], preview=False ,object_fit="contain", height="auto", format="png",allow_preview=False)
1380
- with gr.Row():
1381
- image_guidance_stength = gr.Slider(label="Image Guidance Strength (prompt percentage)", info="applies to Input, Sketch and Template Image",minimum=0, maximum=1.0, value=0.85, step=0.01, interactive=True)
1382
- with gr.Column(elem_classes="outline-important"):
1383
- with gr.Accordion("Advanced Hexagon Settings", open = False):
1384
- with gr.Accordion("Hex Coloring and Exclusion", open = True):
1385
- with gr.Row():
1386
- with gr.Column():
1387
- color_picker = gr.ColorPicker(label="Pick a color to exclude",value="#505050")
1388
- with gr.Column():
1389
- filter_color = gr.Checkbox(label="Filter Excluded Colors from Sampling", value=False,)
1390
- fill_hex = gr.Checkbox(label="Fill Hex with color from Image", value=True)
1391
- exclude_color_button = gr.Button("Exclude Color", elem_id="exlude_color_button", elem_classes="solid")
1392
- color_display = gr.DataFrame(label="List of Excluded RGBA Colors", headers=["R", "G", "B", "A"], elem_id="excluded_colors", type="array", value=build_dataframe(excluded_color_list), interactive=True, elem_classes="solid centered")
1393
- selected_row = gr.Number(0, label="Selected Row", visible=False)
1394
- delete_button = gr.Button("Delete Row", elem_id="delete_exclusion_button", elem_classes="solid")
1395
- with gr.Accordion("Hex Grid Location on Image", open = False):
1396
- with gr.Row():
1397
- start_x = gr.Number(label="Start X", value=20, minimum=-512, maximum= 512, precision=0)
1398
- start_y = gr.Number(label="Start Y", value=20, minimum=-512, maximum= 512, precision=0)
1399
- end_x = gr.Number(label="End X", value=-20, minimum=-512, maximum= 512, precision=0)
1400
- end_y = gr.Number(label="End Y", value=-20, minimum=-512, maximum= 512, precision=0)
1401
- with gr.Row():
1402
- rotation = gr.Slider(-90, 180, 0.0, 0.1, label="Hexagon Rotation (degree)")
1403
- sides = gr.Dropdown(label="Grid Shapes", info="The shapes that form grids",choices=["triangle", "square", "hexagon"], value="hexagon")
1404
- with gr.Row():
1405
- add_hex_text = gr.Dropdown(label="Add Text to Hexagons", choices=[None, "Column-Row Coordinates", "Column(Letter)-Row Coordinates", "Column-Row(Letter) Coordinates", "Sequential Numbers", "Playing Cards Sequential", "Playing Cards Alternate Red and Black", "Custom List"], value=None)
1406
- x_spacing = gr.Number(label="Adjust Horizontal spacing", value=-14, minimum=-200, maximum=200, precision=1)
1407
- y_spacing = gr.Number(label="Adjust Vertical spacing", value=3, minimum=-200, maximum=200, precision=1)
1408
- with gr.Row():
1409
- custom_text_list = gr.TextArea(label="Custom Text List", value=constants.cards_alternating, visible=False,)
1410
- custom_text_color_list = gr.TextArea(label="Custom Text Color List", value=constants.card_colors_alternating, visible=False)
1411
- with gr.Row():
1412
- hex_text_info = gr.Markdown("""
1413
- ### Text Color uses the Border Color and Border Opacity, unless you use a custom list.
1414
- ### The Custom Text List and Custom Text Color List are repeating comma separated lists.
1415
- ### The custom color list is a comma separated list of hex colors.
1416
- #### Example: "A,2,3,4,5,6,7,8,9,10,J,Q,K", "red,#0000FF,#00FF00,red,#FFFF00,#00FFFF,#FF8000,#FF00FF,#FF0080,#FF8000,#FF0080,lightblue"
1417
- """, elem_id="hex_text_info", visible=False)
1418
- add_hex_text.change(
1419
- fn=lambda x: (
1420
- gr.update(visible=(x == "Custom List")),
1421
- gr.update(visible=(x == "Custom List")),
1422
- gr.update(visible=(x != None))
1423
- ),
1424
- inputs=add_hex_text,
1425
- outputs=[custom_text_list, custom_text_color_list, hex_text_info]
1426
- )
1427
- with gr.Row():
1428
- hex_size = gr.Number(label="Hexagon Size", value=120, minimum=1, maximum=768)
1429
- border_size = gr.Slider(-5,25,value=2,step=1,label="Border Size")
1430
- with gr.Row():
1431
- background_color = gr.ColorPicker(label="Background Color", value="#000000", interactive=True)
1432
- background_opacity = gr.Slider(0,100,0,1,label="Background Opacity %")
1433
- border_color = gr.ColorPicker(label="Border Color", value="#7b7b7b", interactive=True)
1434
- border_opacity = gr.Slider(0,100,50,1,label="Border Opacity %")
1435
- with gr.Row():
1436
- hex_button = gr.Button("Generate Hex Grid!", elem_classes="solid", elem_id="btn-generate")
1437
- with gr.Row():
1438
- output_image = gr.Image(label="Hexagon Grid Image", image_mode = "RGBA", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="ImgOutput",interactive=True)
1439
- overlay_image = gr.Image(label="Hexagon Overlay Image", image_mode = "RGBA", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="ImgOverlay",interactive=True)
1440
- with gr.Accordion("Grid adjustments", open=True):
 
 
 
 
 
 
 
1441
  with gr.Row():
1442
- with gr.Column(scale=1):
1443
- output_grid_tilt = gr.Slider(minimum=-90, maximum=90, value=0, step=0.05, label="Tilt Angle (degrees)")
1444
- output_grid_rotation = gr.Slider(minimum=-180, maximum=180, value=0, step=0.05, label="Rotation Angle (degrees)")
1445
- with gr.Column(scale=1):
1446
- output_alpha_composite = gr.Slider(0,100,50,0.5, label="Alpha Composite Intensity*")
1447
- output_blend_multiply_composite = gr.Slider(0,100,50,0.5, label="Multiply Intensity")
1448
- output_overlay_composite = gr.Slider(0,100,50,0.5, label="Interpolate Intensity")
1449
- with gr.Accordion("Add Margins (for printing)", open=False):
1450
- with gr.Row():
1451
- border_image_source = gr.Radio(label="Add Margins around which Image", choices=["Input Image", "Overlay Image"], value="Overlay Image")
1452
- with gr.Row():
1453
- mask_width = gr.Number(label="Margins Width", value=10, minimum=0, maximum=100, precision=0)
1454
- mask_height = gr.Number(label="Margins Height", value=10, minimum=0, maximum=100, precision=0)
1455
- with gr.Row():
1456
- margin_color = gr.ColorPicker(label="Margin Color", value="#333333FF", interactive=True)
1457
- margin_opacity = gr.Slider(0,100,95,0.5,label="Margin Opacity %")
1458
- with gr.Row():
1459
- add_border_button = gr.Button("Add Margins", elem_classes="solid", variant="secondary")
1460
- with gr.Row():
1461
- bordered_image_output = gr.Image(label="Image with Margins", image_mode="RGBA", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="ImgBordered",interactive=False, show_download_button=True, show_fullscreen_button=True, show_share_button=True)
1462
- accordian_3d = gr.Accordion("Height Maps and 3D (optional, fun)", open=False, elem_id="accordian_3d")
1463
- with accordian_3d:
1464
- with gr.Row():
1465
- depth_image_source = gr.Radio(
1466
- label="Depth Image Source",
1467
- choices=["Input Image", "Hexagon Grid Image", "Overlay Image", "Image with Margins"],
1468
- value="Input Image"
1469
- )
1470
- with gr.Accordion("Advanced 3D Generation Settings", open=False):
1471
  with gr.Row():
1472
- with gr.Column():
1473
- # Use standard seed settings only
1474
- seed_3d = gr.Slider(0, constants.MAX_SEED, label="Seed (3D Generation)", value=0, step=1, randomize=True)
1475
- randomize_seed_3d = gr.Checkbox(label="Randomize Seed (3D Generation)", value=True)
1476
- with gr.Column():
1477
- steps = gr.Slider(6, 36, value=12, step=1, label="Image Sampling Steps", interactive=True)
1478
- video_resolution = gr.Slider(384, 768, value=480, step=32, label="Video Resolution (*danger*)", interactive=True)
1479
- model_resolution = gr.Slider(512, 2304, value=1024, step=64, label="3D Model Resolution", interactive=True)
1480
- with gr.Row():
1481
- generate_3d_asset_button = gr.Button("Generate 3D Asset", elem_classes="solid", variant="secondary", interactive=False)
1482
- with gr.Row():
1483
- depth_output = gr.Image(label="Depth Map", image_mode="L", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="DepthOutput",interactive=False, show_download_button=True, show_fullscreen_button=True, show_share_button=True, height=400)
1484
- with gr.Row():
1485
- # For display: video output and 3D model preview (GLTF)
1486
- video_output = gr.Video(label="3D Asset Video", autoplay=True, loop=True, height=400)
1487
- with gr.Accordion("GLB Extraction Settings", open=False):
1488
  with gr.Row():
1489
- mesh_simplify = gr.Slider(0.9, 0.98, label="Simplify", value=0.95, step=0.01)
1490
- texture_size = gr.Slider(512, 2048, label="Texture Size", value=1024, step=512)
1491
  with gr.Row():
1492
- extract_glb_btn = gr.Button("Extract GLB", interactive=False)
1493
- extract_gaussian_btn = gr.Button("Extract Gaussian", interactive=False)
1494
  with gr.Row():
1495
- with gr.Column(scale=2):
1496
- model_output = gr.Model3D(label="Extracted 3D Model", clear_color=[1.0, 1.0, 1.0, 1.0],
1497
- elem_classes="centered solid imgcontainer", interactive=True)
1498
- with gr.Column(scale=1):
1499
- glb_file = gr.File(label="3D GLTF", elem_classes="solid small centered", height=250)
1500
- gaussian_file = gr.File(label="Gaussian", elem_classes="solid small centered", height=250)
1501
- gr.Markdown("""
1502
- ### Files over 10 MB may not display in the 3D model viewer
1503
- """, elem_id="file_size_info", elem_classes="intro" )
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1504
 
1505
  is_multiimage = gr.State(False)
1506
  output_buf = gr.State()
@@ -1700,7 +1737,7 @@ with gr.Blocks(css_paths="style_20250314.css", title=title, theme='Surn/beeuty',
1700
  scroll_to_output=True
1701
  ).then(
1702
  fn=generate_3d_asset_part2,
1703
- inputs=[depth_output, ddd_image_path, ddd_file_name, seed_3d, steps, model_resolution, video_resolution ],
1704
  outputs=[output_buf, video_output, depth_output],
1705
  scroll_to_output=True
1706
  ).then(
@@ -1762,9 +1799,8 @@ if __name__ == "__main__":
1762
  TRELLIS_PIPELINE = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
1763
  TRELLIS_PIPELINE.to(device)
1764
  try:
1765
- TRELLIS_PIPELINE.preprocess_image(Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8)), 512) # Preload rembg
1766
  except:
1767
  pass
1768
  hexaGrid.queue(default_concurrency_limit=1,max_size=12,api_open=False)
1769
- hexaGrid.launch(allowed_paths=["assets","/","./assets","images","./images", "./images/prerendered", 'e:/TMP'], favicon_path="./assets/favicon.ico", max_file_size="10mb")
1770
-
 
1
+ from ast import Str
2
+ from tokenize import String
3
  import gradio as gr
4
+ from numba.core.types import string
5
  import spaces
6
  import os
7
  import numpy as np
 
55
  lerp_imagemath,
56
  shrink_and_paste_on_blank,
57
  show_lut,
58
+ apply_lut,
59
  apply_lut_to_image_path,
60
  multiply_and_blend_images,
61
  alpha_composite_with_control,
 
66
  build_prerendered_images_by_quality,
67
  get_image_from_dict,
68
  calculate_optimal_fill_dimensions,
69
+ save_image_to_temp_png,
70
+ combine_depth_map_with_Image
71
  )
72
 
73
  from utils.hex_grid import (
 
116
  "FluxFillPipeline": FluxFillPipeline
117
  }
118
 
119
+ import utils.version_info as version_info
120
+
121
  #from utils.depth_estimation import (get_depth_map_from_state)
122
 
123
  input_image_palette = []
 
126
  user_dir = constants.TMPDIR
127
  lora_models = get_lora_models()
128
  selected_index = gr.State(value=-1)
129
+ #html_versions = version_info.versions_html()
130
 
131
  image_processor: Optional[DPTImageProcessor] = None
132
  depth_model: Optional[DPTForDepthEstimation] = None
 
135
 
136
  def start_session(req: gr.Request):
137
  print(f"Starting session with hash: {req.session_hash}")
138
+ session_hash = str(req.session_hash)
139
  user_dir = os.path.join(constants.TMPDIR, str(req.session_hash))
140
  os.makedirs(user_dir, exist_ok=True)
141
 
 
143
  def end_session(req: gr.Request):
144
  print(f"Ending session with hash: {req.session_hash}")
145
  user_dir = os.path.join(constants.TMPDIR, str(req.session_hash))
146
+ if os.path.exists(user_dir):
147
+ shutil.rmtree(user_dir)
148
 
149
  # Register the cleanup function
150
  atexit.register(end_session)
 
781
  if replace_current_lut_example_img:
782
  current_lut_example_img = sketch
783
  return sketch
784
+ ################################################################## DEPTH ESTIMATION ###############################################################################
785
 
786
  @spaces.GPU(progress=gr.Progress(track_tqdm=True))
787
  def load_trellis_model():
 
794
  TRELLIS_PIPELINE.cuda()
795
  # Preload with a dummy image to finalize initialization
796
  try:
797
+ TRELLIS_PIPELINE.preprocess_image(Image.fromarray(np.zeros((518, 518, 4), dtype=np.uint8))) # Preload rembg
798
  except:
799
  pass
800
  print("TRELLIS_PIPELINE loaded\n")
 
965
  # Determine the final seed using default MAX_SEED from constants
966
  final_seed = np.random.randint(0, constants.MAX_SEED) if randomize_seed else seed
967
  # Process the image for depth estimation
968
+ depth_img = depth_process_image(image_path, resized_width=1536, z_scale=208)
969
  #depth_img = resize_image_with_aspect_ratio(depth_img, 1536, 1536)
970
 
971
  user_dir = os.path.join(constants.TMPDIR, str(req.session_hash))
 
974
  return depth_img, image_path, output_name, final_seed
975
 
976
  @spaces.GPU(duration=150,progress=gr.Progress(track_tqdm=True))
977
+ def generate_3d_asset_part2(depth_img, image_path, output_name, seed, steps, model_resolution, video_resolution, depth_alpha: int, multi_mode: str, req: gr.Request, progress=gr.Progress(track_tqdm=True)):
978
  # Open image using standardized defaults
979
  image_raw = Image.open(image_path).convert("RGB")
980
  resized_image = resize_image_with_aspect_ratio(image_raw, model_resolution, model_resolution)
981
  depth_img = Image.open(depth_img).convert("RGBA")
982
+ resized_depth_image = resize_image_with_aspect_ratio(depth_img, model_resolution, model_resolution)
983
+ resized_depth_image_enhanced = apply_lut(resized_depth_image, "./LUT/Contrast.cube", 100)
984
+ combined_depth_img = combine_depth_map_with_Image(resized_image, resized_depth_image_enhanced, model_resolution, model_resolution, depth_alpha)
985
+
986
+ use_multi = (multi_mode != "singleimage")
987
+ if use_multi:
988
+ images = [resized_image, combined_depth_img, resized_depth_image]
989
+ processed_images = [TRELLIS_PIPELINE.preprocess_image(image, max_resolution=model_resolution, remove_bg=False) for image in images]
990
 
991
  if TRELLIS_PIPELINE is None:
992
  gr.Warning(f"Trellis Pipeline is not initialized: {TRELLIS_PIPELINE.device()}")
 
995
  # Preprocess and run the Trellis pipeline with fixed sampler settings
996
  try:
997
  TRELLIS_PIPELINE.cuda()
998
+ if not use_multi:
999
+ processed_image = TRELLIS_PIPELINE.preprocess_image(combined_depth_img, max_resolution=model_resolution, remove_bg = False)
1000
+ outputs = TRELLIS_PIPELINE.run(
1001
+ processed_image,
1002
+ seed=seed,
1003
+ formats=["gaussian", "mesh"],
1004
+ preprocess_image=False,
1005
+ sparse_structure_sampler_params={
1006
+ "steps": steps,
1007
+ "cfg_strength": 7.5,
1008
+ },
1009
+ slat_sampler_params={
1010
+ "steps": steps,
1011
+ "cfg_strength": 3.0,
1012
+ },
1013
+ )
1014
+ else:
1015
+ outputs = TRELLIS_PIPELINE.run_multi_image(
1016
+ processed_images,
1017
+ seed=seed,
1018
+ formats=["gaussian", "mesh"],
1019
+ preprocess_image=False,
1020
+ sparse_structure_sampler_params={
1021
+ "steps": steps,
1022
+ "cfg_strength": 7.5,
1023
+ },
1024
+ slat_sampler_params={
1025
+ "steps": steps,
1026
+ "cfg_strength": 3.0,
1027
+ },
1028
+ mode=multi_mode,
1029
+ )
1030
 
1031
  # Validate the mesh
1032
  mesh = outputs['mesh'][0]
 
1064
 
1065
  video = render_utils.render_video(outputs['gaussian'][0], resolution=video_resolution, num_frames=64, r=1, fov=45)['color']
1066
  try:
1067
+ #video_rf = render_utils.render_video(outputs['radiance_field'][0], resolution=video_resolution, num_frames=64, r=1, fov=45)['color']
1068
  video_geo = render_utils.render_video(outputs['mesh'][0], resolution=video_resolution, num_frames=64, r=1, fov=45)['normal']
1069
  video = [np.concatenate([video[i], video_geo[i]], axis=1) for i in range(len(video))]
1070
  except Exception as e:
 
1073
  video_path = os.path.join(user_dir, f'{output_name}.mp4')
1074
  imageio.mimsave(video_path, video, fps=8)
1075
 
1076
+ #snapshot_results = render_utils.render_snapshot_depth(outputs['radiance_field'][0], resolution=1280, r=1, fov=80)
1077
  #depth_snapshot = Image.fromarray(snapshot_results['normal'][0]).convert("L")
1078
  depth_snapshot = depth_img
1079
 
 
1136
 
1137
  @spaces.GPU()
1138
  def getVersions():
1139
+ #return html_versions
1140
+ return version_info.versions_html()
1141
 
1142
  #generate_input_image_click.zerogpu = True
1143
  #generate_depth_button_click.zerogpu = True
 
1153
  with gr.Row():
1154
  gr.Markdown("""
1155
  # HexaGrid Creator
1156
+ ## Transform Your Images into Mesmerizing Hexagon Grid Masterpieces with Advanced AI, 3D Depth, and Interactive Filters! ⬢
1157
+ ### <span style='color: red; font-weight: bolder;'>BEST VIEWED ON DESKTOP – New Sketch Pad, Image Filters, and 3D Features Enabled</span>
1158
+ """, elem_classes="intro", sanitize_html=False)
1159
  with gr.Row():
1160
  with gr.Accordion(" Welcome to HexaGrid Creator, the ultimate tool for transforming your images into stunning hexagon grid artworks. Whether you're a tabletop game enthusiast, a digital artist, or someone who loves unique patterns, HexaGrid Creator has something for you.", open=False, elem_classes="intro"):
1161
  gr.Markdown ("""
1162
 
1163
  ## Drop an image into the Input Image and get started!
1164
 
1165
+ ### <span style='color: red; font-weight: bolder;'>ZeroGPU sometimes crashes or is not available. It is not a code issue.</span>
1166
 
1167
  ## What is HexaGrid Creator?
1168
  HexaGrid Creator is a web-based application that allows you to apply a hexagon grid overlay to any image. You can customize the size, color, and opacity of the hexagons, as well as the background and border colors. The result is a visually striking image that looks like it was made from hexagonal tiles!
 
1197
 
1198
  Join the hive and start creating with HexaGrid Creator today!
1199
 
1200
+ """, elem_classes="intro", sanitize_html=False)
1201
  with gr.Row():
1202
  with gr.Column(scale=2):
1203
  input_image = gr.Image(
 
1305
  with gr.Row():
1306
  blur_button = gr.Button("Blur Input Image", elem_classes="solid")
1307
  blur_sketch_button = gr.Button("Blur Sketch", elem_classes="solid")
1308
+ with gr.Tabs(selected="hex_gen") as input_tabs:
1309
+ with gr.Tab("HexaGrid Generation", id="hex_gen") as hexa_gen_tab:
1310
+ with gr.Column(elem_classes="outline-important"):
1311
+ with gr.Accordion("Advanced Hexagon Settings", open = False):
1312
+ with gr.Accordion("Hex Coloring and Exclusion", open = True):
1313
+ with gr.Row():
1314
+ color_picker = gr.ColorPicker(label="Pick a color to exclude",value="#505050")
1315
+ filter_color = gr.Checkbox(label="Filter Excluded Colors from Sampling", value=False,)
1316
+ exclude_color_button = gr.Button("Exclude Color", elem_id="exlude_color_button", elem_classes="solid")
1317
+ color_display = gr.DataFrame(label="List of Excluded RGBA Colors", headers=["R", "G", "B", "A"], elem_id="excluded_colors", type="array", value=build_dataframe(excluded_color_list), interactive=True, elem_classes="solid centered")
1318
+ selected_row = gr.Number(0, label="Selected Row", visible=False)
1319
+ delete_button = gr.Button("Delete Row", elem_id="delete_exclusion_button", elem_classes="solid")
1320
  with gr.Row():
1321
+ start_x = gr.Number(label="Start X", value=20, minimum=-512, maximum= 512, precision=0)
1322
+ start_y = gr.Number(label="Start Y", value=20, minimum=-512, maximum= 512, precision=0)
1323
+ end_x = gr.Number(label="End X", value=-20, minimum=-512, maximum= 512, precision=0)
1324
+ end_y = gr.Number(label="End Y", value=-20, minimum=-512, maximum= 512, precision=0)
1325
+ with gr.Row():
1326
+ rotation = gr.Slider(-90, 180, 0.0, 0.1, label="Hexagon Rotation (degree)")
1327
+ sides = gr.Dropdown(label="Grid Shapes", info="The shapes that form grids",choices=["triangle", "square", "hexagon"], value="hexagon", allow_custom_value=False)
1328
+ with gr.Row():
1329
+ add_hex_text = gr.Dropdown(label="Add Text to Hexagons", choices=[None, "Column-Row Coordinates", "Column(Letter)-Row Coordinates", "Column-Row(Letter) Coordinates", "Sequential Numbers", "Playing Cards Sequential", "Playing Cards Alternate Red and Black", "Custom List"], value=None, allow_custom_value=False)
1330
+ x_spacing = gr.Number(label="Adjust Horizontal spacing", value=-14, minimum=-200, maximum=200, precision=1)
1331
+ y_spacing = gr.Number(label="Adjust Vertical spacing", value=3, minimum=-200, maximum=200, precision=1)
1332
+ with gr.Row():
1333
+ custom_text_list = gr.TextArea(label="Custom Text List", value=constants.cards_alternating, visible=False,)
1334
+ custom_text_color_list = gr.TextArea(label="Custom Text Color List", value=constants.card_colors_alternating, visible=False)
1335
+ with gr.Row():
1336
+ hex_text_info = gr.Markdown("""
1337
+ ### Text Color uses the Border Color and Border Opacity, unless you use a custom list.
1338
+ ### The Custom Text List and Custom Text Color List are repeating comma separated lists.
1339
+ ### The custom color list is a comma separated list of hex colors.
1340
+ #### Example: "A,2,3,4,5,6,7,8,9,10,J,Q,K", "red,#0000FF,#00FF00,red,#FFFF00,#00FFFF,#FF8000,#FF00FF,#FF0080,#FF8000,#FF0080,lightblue"
1341
+ """, elem_id="hex_text_info", visible=False)
1342
+ add_hex_text.change(
1343
+ fn=lambda x: (
1344
+ gr.update(visible=(x == "Custom List")),
1345
+ gr.update(visible=(x == "Custom List")),
1346
+ gr.update(visible=(x != None))
1347
+ ),
1348
+ inputs=add_hex_text,
1349
+ outputs=[custom_text_list, custom_text_color_list, hex_text_info]
1350
  )
1351
+ with gr.Row():
1352
+ hex_size = gr.Number(label="Hexagon Size", value=120, minimum=1, maximum=768)
1353
+ border_size = gr.Slider(-5,25,value=2,step=1,label="Border Size")
1354
+ fill_hex = gr.Checkbox(label="Fill Hex with color from Image", value=True)
1355
+ with gr.Row():
1356
+ background_color = gr.ColorPicker(label="Background Color", value="#000000", interactive=True)
1357
+ background_opacity = gr.Slider(0,100,0,1,label="Background Opacity %")
1358
+ border_color = gr.ColorPicker(label="Border Color", value="#7b7b7b", interactive=True)
1359
+ border_opacity = gr.Slider(0,100,50,1,label="Border Opacity %")
1360
+ with gr.Row():
1361
+ hex_button = gr.Button("Generate Hex Grid!", elem_classes="solid", elem_id="btn-generate")
1362
+ with gr.Row():
1363
+ output_image = gr.Image(label="Hexagon Grid Image", image_mode = "RGBA", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="ImgOutput",interactive=True)
1364
+ overlay_image = gr.Image(label="Hexagon Overlay Image", image_mode = "RGBA", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="ImgOverlay",interactive=True)
1365
+ with gr.Accordion("Grid adjustments", open=True):
1366
  with gr.Row():
1367
+ with gr.Column(scale=1):
1368
+ output_grid_tilt = gr.Slider(minimum=-90, maximum=90, value=0, step=0.05, label="Tilt Angle (degrees)")
1369
+ output_grid_rotation = gr.Slider(minimum=-180, maximum=180, value=0, step=0.05, label="Rotation Angle (degrees)")
1370
+ with gr.Column(scale=1):
1371
+ output_alpha_composite = gr.Slider(0,100,50,0.5, label="Alpha Composite Intensity*")
1372
+ output_blend_multiply_composite = gr.Slider(0,100,50,0.5, label="Multiply Intensity")
1373
+ output_overlay_composite = gr.Slider(0,100,50,0.5, label="Interpolate Intensity")
1374
+
1375
+ with gr.Tab("Image Generation (AI)", id="image_gen") as image_gen_tab:
1376
+ with gr.Row(elem_id="image_gen"):
1377
+ with gr.Accordion("Generate AI Image (optional, fun)", open = False):
1378
+ with gr.Row():
1379
+ with gr.Column(scale=1):
1380
+ generate_input_image = gr.Button(
1381
+ "Generate from Input Image & Options ",
1382
+ elem_id="generate_input_image",
1383
  elem_classes="solid"
1384
  )
1385
+ # model_options = gr.Dropdown(
1386
+ # label="Choose an AI Model*",
1387
+ # choices=constants.MODELS + constants.LORA_WEIGHTS + ["Manual Entry"],
1388
+ # value="Cossale/Frames2-Flex.1",
1389
+ # elem_classes="solid", allow_custom_value=False
1390
+ # )
1391
+ model_textbox = gr.Textbox(
1392
+ label="LORA/Model",
1393
+ value="Cossale/Frames2-Flex.1",
1394
+ elem_classes="solid",
1395
+ elem_id="inference_model",
1396
+ lines=2,
1397
+ visible=False
1398
+ )
1399
+ with gr.Accordion("Choose Image Style*", open=True):
1400
+ lora_gallery = gr.Gallery(
1401
+ [(open_image(image_path), title) for image_path, title in lora_models],
1402
+ label="Styles",
1403
+ allow_preview=False, preview=False ,
1404
+ columns=2,
1405
+ elem_id="lora_gallery",
1406
+ show_share_button=False,
1407
+ elem_classes="solid", type="filepath",
1408
+ object_fit="contain", height="auto", format="png",
1409
+ )
1410
+ # Update map_options to a Dropdown with choices from constants.PROMPTS keys
1411
  with gr.Row():
1412
+ with gr.Column():
1413
+ map_options = gr.Dropdown(
1414
+ label="Map Options*",
1415
+ choices=list(constants.PROMPTS.keys()),
1416
+ value="Alien Landscape",
1417
+ elem_classes="solid",
1418
+ scale=0, allow_custom_value=False
1419
+ )
1420
+ # Add Dropdown for sizing of Images, height and width based on selection. Options are 16x9, 16x10, 4x5, 1x1
1421
+ # The values of height and width are based on common resolutions for each aspect ratio
1422
+ # Default to 16x9, 912x512
1423
+ image_size_ratio = gr.Dropdown(label="Image Aspect Ratio", choices=["16:9", "16:10", "4:5", "4:3", "2:1","3:2","1:1", "9:16", "10:16", "5:4", "3:4","1:2", "2:3"], value="16:9", elem_classes="solid", type="value", scale=0, interactive=True, allow_custom_value=False)
1424
+ with gr.Column():
1425
+ seed_slider = gr.Slider(
1426
+ label="Seed",
1427
+ minimum=0,
1428
+ maximum=constants.MAX_SEED,
1429
+ step=1,
1430
+ value=0,
1431
+ scale=0, randomize=True, elem_id="rnd_seed"
1432
+ )
1433
+ randomize_seed = gr.Checkbox(label="Randomize seed", value=False, scale=0, interactive=True)
1434
+ prompt_textbox = gr.Textbox(
1435
+ label="Prompt",
1436
+ visible=False,
1437
+ elem_classes="solid",
1438
+ value="Planetary overhead view, directly from above, centered on the planet’s surface, orthographic (rectangular tabletop_map) alien planet map, Battletech_boardgame scifi world with forests, lakes, oceans, continents and snow at the top and bottom, (middle is dark, no_reflections, no_shadows), looking straight down.",
1439
+ lines=4
1440
+ )
1441
+ negative_prompt_textbox = gr.Textbox(
1442
+ label="Negative Prompt",
1443
+ visible=False,
1444
+ elem_classes="solid",
1445
+ value="Earth, low quality, bad anatomy, blurry, cropped, worst quality, shadows, people, humans, reflections, shadows, realistic map of the Earth, isometric, text, camera_angle"
1446
+ )
1447
+ prompt_notes_label = gr.Label(
1448
+ "You should use FRM$ as trigger words. @1.5 minutes",
1449
+ elem_classes="solid centered small",
1450
+ show_label=False,
1451
+ visible=False
1452
+ )
1453
+ # Keep the change event to maintain functionality
1454
+ map_options.change(
1455
+ fn=update_prompt_visibility,
1456
+ inputs=[map_options],
1457
+ outputs=[prompt_textbox, negative_prompt_textbox, prompt_notes_label]
1458
+ )
1459
+ with gr.Column(scale=2):
1460
+ with gr.Row():
1461
+ with gr.Column():
1462
+ generate_input_image_from_gallery = gr.Button(
1463
+ "Generate AI Image from Template Options",
1464
+ elem_id="generate_input_image_from_gallery",
1465
+ elem_classes="solid"
1466
+ )
1467
+ with gr.Column():
1468
+ replace_input_image_button = gr.Button(
1469
+ "Replace Input Image with Template",
1470
+ elem_id="prerendered_replace_input_image_button",
1471
+ elem_classes="solid"
1472
+ )
1473
+ with gr.Row():
1474
+ with gr.Accordion("Template Images", open = False):
1475
+ with gr.Row():
1476
+ with gr.Column(scale=2):
1477
+ # Gallery from PRE_RENDERED_IMAGES GOES HERE
1478
+ prerendered_image_gallery = gr.Gallery(label="Image Gallery", show_label=True, value=build_prerendered_images_by_quality(3,'thumbnail'), elem_id="gallery",
1479
+ elem_classes="solid", type="filepath", columns=[3], rows=[3], preview=False ,object_fit="contain", height="auto", format="png",allow_preview=False)
1480
+ with gr.Row():
1481
+ image_guidance_stength = gr.Slider(label="Image Guidance Strength (prompt percentage)", info="applies to Input, Sketch and Template Image",minimum=0, maximum=1.0, value=0.85, step=0.01, interactive=True)
1482
+
1483
+ with gr.Tab("Add Margins", id="margins") as margins_tab:
1484
  with gr.Row():
1485
+ border_image_source = gr.Radio(label="Add Margins around which Image", choices=["Input Image", "Overlay Image"], value="Overlay Image")
1486
  with gr.Row():
1487
+ mask_width = gr.Number(label="Margins Width", value=10, minimum=0, maximum=100, precision=0)
1488
+ mask_height = gr.Number(label="Margins Height", value=10, minimum=0, maximum=100, precision=0)
1489
  with gr.Row():
1490
+ margin_color = gr.ColorPicker(label="Margin Color", value="#333333FF", interactive=True)
1491
+ margin_opacity = gr.Slider(0,100,95,0.5,label="Margin Opacity %")
1492
  with gr.Row():
1493
+ add_border_button = gr.Button("Add Margins", elem_classes="solid", variant="secondary")
 
1494
  with gr.Row():
1495
+ bordered_image_output = gr.Image(label="Image with Margins", image_mode="RGBA", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="ImgBordered",interactive=False, show_download_button=True, show_fullscreen_button=True, show_share_button=True)
1496
+ with gr.Tab("3D and Depth (fun)", id="3D") as depth_tab:
1497
+ accordian_3d = gr.Accordion("Click here to toggle between Image Generation and 3D models", open=False, elem_id="accordian_3d")
1498
+ with accordian_3d:
1499
+ with gr.Row():
1500
+ depth_image_source = gr.Radio(
1501
+ label="Depth Image Source",
1502
+ choices=["Input Image", "Hexagon Grid Image", "Overlay Image", "Image with Margins"],
1503
+ value="Input Image"
1504
+ )
1505
+ with gr.Accordion("Advanced 3D Generation Settings", open=False):
1506
+ with gr.Row():
1507
+ with gr.Column():
1508
+ # Use standard seed settings only
1509
+ seed_3d = gr.Slider(0, constants.MAX_SEED, label="Seed (3D Generation)", value=0, step=1, randomize=True)
1510
+ randomize_seed_3d = gr.Checkbox(label="Randomize Seed (3D Generation)", value=True)
1511
+ depth_alpha = gr.Slider(-200,200,0, step=5, label="Amount of Depth Image to apply to main Image", interactive=True)
1512
+ multiimage_algo = gr.Radio(["singleimage","stochastic", "multidiffusion"], label="Multi-image Algorithm", value="singleimage")
1513
+ with gr.Column():
1514
+ steps = gr.Slider(6, 100, value=25, step=1, label="Image Sampling Steps", interactive=True)
1515
+ video_resolution = gr.Slider(384, 768, value=480, step=32, label="Video Resolution (*danger*)", interactive=True)
1516
+ model_resolution = gr.Slider(518, 2520, value=1540, step=28, label="3D Model Resolution", interactive=True)
1517
+ with gr.Row():
1518
+ generate_3d_asset_button = gr.Button("Generate 3D Asset", elem_classes="solid", variant="secondary", interactive=False)
1519
+ with gr.Row():
1520
+ depth_output = gr.Image(label="Depth Map", image_mode="L", elem_classes="centered solid imgcontainer", format="PNG", type="filepath", key="DepthOutput",interactive=False, show_download_button=True, show_fullscreen_button=True, show_share_button=True, height=400)
1521
+ with gr.Row():
1522
+ # For display: video output and 3D model preview (GLTF)
1523
+ video_output = gr.Video(label="3D Asset Video", autoplay=True, loop=True, height=400)
1524
+ with gr.Accordion("GLB Extraction Settings", open=False):
1525
+ with gr.Row():
1526
+ mesh_simplify = gr.Slider(0.9, 0.98, label="Simplify", value=0.95, step=0.01)
1527
+ texture_size = gr.Slider(512, 2048, label="Texture Size", value=1024, step=512)
1528
+ with gr.Row():
1529
+ extract_glb_btn = gr.Button("Extract GLB", interactive=False)
1530
+ extract_gaussian_btn = gr.Button("Extract Gaussian", interactive=False)
1531
+ with gr.Row():
1532
+ with gr.Column(scale=2):
1533
+ model_output = gr.Model3D(label="Extracted 3D Model", clear_color=[1.0, 1.0, 1.0, 1.0],
1534
+ elem_classes="centered solid imgcontainer", interactive=True)
1535
+ with gr.Column(scale=1):
1536
+ glb_file = gr.File(label="3D GLTF", elem_classes="solid small centered", height=250)
1537
+ gaussian_file = gr.File(label="Gaussian", elem_classes="solid small centered", height=250)
1538
+ gr.Markdown("""
1539
+ ### Files over 10 MB may not display in the 3D model viewer
1540
+ """, elem_id="file_size_info", elem_classes="intro" )
1541
 
1542
  is_multiimage = gr.State(False)
1543
  output_buf = gr.State()
 
1737
  scroll_to_output=True
1738
  ).then(
1739
  fn=generate_3d_asset_part2,
1740
+ inputs=[depth_output, ddd_image_path, ddd_file_name, seed_3d, steps, model_resolution, video_resolution, depth_alpha, multiimage_algo],
1741
  outputs=[output_buf, video_output, depth_output],
1742
  scroll_to_output=True
1743
  ).then(
 
1799
  TRELLIS_PIPELINE = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
1800
  TRELLIS_PIPELINE.to(device)
1801
  try:
1802
+ TRELLIS_PIPELINE.preprocess_image(Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8)), 512, True) # Preload rembg
1803
  except:
1804
  pass
1805
  hexaGrid.queue(default_concurrency_limit=1,max_size=12,api_open=False)
1806
+ hexaGrid.launch(allowed_paths=["assets","/","./assets","images","./images", "./images/prerendered", 'e:/TMP'], favicon_path="./assets/favicon.ico", max_file_size="10mb")
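
This commit also adds an `apply_lut` import from `utils.image_utils` and uses it to contrast-boost the depth map with `./LUT/Contrast.cube` before blending (see `generate_3d_asset_part2` above). The repository's own LUT helper is not shown in the diff; as a rough, hypothetical sketch of what applying a `.cube` 3D LUT with plain Pillow can look like (the parser assumes a well-formed 3D LUT and is not the project's code):

```python
from PIL import Image, ImageFilter

def load_cube_lut(path):
    """Parse a .cube file into a PIL Color3DLUT (assumes a well-formed 3D LUT)."""
    size = None
    table = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.upper().startswith("LUT_3D_SIZE"):
                size = int(line.split()[-1])
            elif line[0].isdigit() or line[0] in "-.":
                # .cube rows are "r g b" floats with red varying fastest,
                # which matches the table order Color3DLUT expects.
                table.extend(float(v) for v in line.split()[:3])
    if size is None:
        raise ValueError(f"{path} has no LUT_3D_SIZE entry")
    return ImageFilter.Color3DLUT(size, table, channels=3)

def apply_cube_lut(image, lut_path, strength=100):
    """Apply a .cube LUT; `strength` (0-100) blends between original and graded."""
    rgb = image.convert("RGB")
    graded = rgb.filter(load_cube_lut(lut_path))
    return Image.blend(rgb, graded, max(0, min(strength, 100)) / 100.0)

# Example usage (paths are placeholders):
# out = apply_cube_lut(Image.open("depth.png"), "./LUT/Contrast.cube", strength=100)
```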
 
style_20250314.css CHANGED
@@ -21,7 +21,7 @@
21
  background-color: rgba(242, 218, 163, 0.62);
22
  }
23
 
24
- .dark .gradio-container.gradio-container-5-22-0 .contain .intro .prose {
25
  background-color: rgba(41, 18, 5, 0.38) !important;
26
  }
27
  .toast-body.info {
@@ -165,3 +165,23 @@ a {
165
  padding: 2px;
166
  border-radius: 6px;
167
  }
21
  background-color: rgba(242, 218, 163, 0.62);
22
  }
23
 
24
+ .dark .gradio-container.gradio-container-5-23-1 .contain .intro .prose {
25
  background-color: rgba(41, 18, 5, 0.38) !important;
26
  }
27
  .toast-body.info {
 
165
  padding: 2px;
166
  border-radius: 6px;
167
  }
168
+ .selected.svelte-1tcem6n.svelte-1tcem6n {
169
+ font-size: large;
170
+ font-weight: bold;
171
+ color: var(--body-text-color);
172
+ }
173
+ .tab-wrapper.svelte-1tcem6n.svelte-1tcem6n {
174
+ height: var(--size-12);
175
+ padding-bottom: var(--size-1);
176
+ text-align: center;
177
+ background-blend-mode: multiply;
178
+ border-radius: var(--block-radius);
179
+ background-color: var(--block-background-fill);
180
+
181
+ outline-color: var(--accordion-text-color);
182
+ outline-style: solid;
183
+ outline-width: 2px;
184
+ outline-offset: 2px;
185
+ padding: 2px;
186
+ border-radius: 6px;
187
+ }
trellis/pipelines/trellis_image_to_3d.py CHANGED
@@ -87,6 +87,7 @@ class TrellisImageTo3DPipeline(Pipeline):
         Preprocess the input image.
         """
         # if has alpha channel, use it directly; otherwise, remove background
+        aspect_ratio = int(input.width / input.height)
         has_alpha = False
         if input.mode == 'RGBA':
             alpha = np.array(input)[:, :, 3]
@@ -114,8 +115,9 @@ class TrellisImageTo3DPipeline(Pipeline):
         size = max(bbox[2] - bbox[0], bbox[3] - bbox[1])
         size = int(size * 1.2)
         bbox = center[0] - size // 2, center[1] - size // 2, center[0] + size // 2, center[1] + size // 2
-        output = output.crop(bbox)  # type: ignore
-        output = output.resize((518, 518), Image.Resampling.LANCZOS)
+        output = output.crop(bbox)  # type: ignore
+        new_width = round((588 * aspect_ratio) / 14) * 14
+        output = output.resize((new_width, 588), Image.Resampling.LANCZOS)
         output = np.array(output).astype(np.float32) / 255
         output = output[:, :, :3] * output[:, :, 3:4]
         output = Image.fromarray((output * 255).astype(np.uint8))
@@ -136,13 +138,15 @@ class TrellisImageTo3DPipeline(Pipeline):
             assert image.ndim == 4, "Image tensor should be batched (B, C, H, W)"
         elif isinstance(image, list):
             assert all(isinstance(i, Image.Image) for i in image), "Image list should be list of PIL images"
-            image = [i.resize((518, 518), Image.LANCZOS) for i in image]
+            aspect_ratio = int(image[0].width / image[0].height)
+            new_width = round((588 * aspect_ratio) / 14) * 14
+            image = [i.resize((new_width, 588), Image.LANCZOS) for i in image]
             image = [np.array(i.convert('RGB')).astype(np.float32) / 255 for i in image]
             image = [torch.from_numpy(i).permute(2, 0, 1).float() for i in image]
             image = torch.stack(image).to(self.device)
         else:
             raise ValueError(f"Unsupported type of image: {type(image)}")
-
+
         image = self.image_cond_model_transform(image).to(self.device)
         features = self.models['image_cond_model'](image, is_training=True)['x_prenorm']
         patchtokens = F.layer_norm(features, features.shape[-1:])
@@ -267,6 +271,7 @@ class TrellisImageTo3DPipeline(Pipeline):
         slat_sampler_params: dict = {},
         formats: List[str] = ['mesh', 'gaussian', 'radiance_field'],
         preprocess_image: bool = True,
+        max_resolution: int =1024,
         remove_bg: bool = True,
     ) -> dict:
         """
@@ -280,7 +285,7 @@ class TrellisImageTo3DPipeline(Pipeline):
             preprocess_image (bool): Whether to preprocess the image.
         """
         if preprocess_image:
-            image = self.preprocess_image(image, remove_bg=remove_bg)
+            image = self.preprocess_image(image, max_resolution, remove_bg=remove_bg)
         cond = self.get_cond([image])
         torch.manual_seed(seed)
         coords = self.sample_sparse_structure(cond, num_samples, sparse_structure_sampler_params)
@@ -355,6 +360,7 @@ class TrellisImageTo3DPipeline(Pipeline):
         formats: List[str] = ['mesh', 'gaussian', 'radiance_field'],
         preprocess_image: bool = True,
         mode: Literal['stochastic', 'multidiffusion'] = 'stochastic',
+        max_resolution: int =1024,
         remove_bg: bool = True,
     ) -> dict:
         """
@@ -368,7 +374,7 @@ class TrellisImageTo3DPipeline(Pipeline):
             preprocess_image (bool): Whether to preprocess the image.
         """
         if preprocess_image:
-            images = [self.preprocess_image(image,remove_bg=remove_bg) for image in images]
+            images = [self.preprocess_image(image, max_resolution, remove_bg=remove_bg) for image in images]
         cond = self.get_cond(images)
         cond['neg_cond'] = cond['neg_cond'][:1]
         torch.manual_seed(seed)
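Note: the hunks above replace the fixed 518×518 conditioning resize with an aspect-ratio-aware target — height 588 px, width snapped to a multiple of 14, which appears intended to keep the input aligned with the 14-pixel patch grid of the DINOv2-style image-conditioning model (588 = 42 × 14). `run` and `run_multi_image` also gain a `max_resolution` argument that is passed through to `preprocess_image`. Below is a minimal sketch of the sizing rule as committed; the constant names are mine, not from the repo. Because `int(width / height)` truncates, the width only grows for images at least twice as wide as they are tall, and collapses to 0 for portrait inputs.

```python
from PIL import Image

PATCH_SIZE = 14       # DINOv2 ViT patch size; 588 = 42 * PATCH_SIZE
TARGET_HEIGHT = 588

def conditioning_size(img: Image.Image) -> tuple[int, int]:
    """Mirror the resize rule from the diff: truncated integer aspect ratio,
    width rounded to the nearest multiple of the patch size."""
    aspect_ratio = int(img.width / img.height)   # truncates toward zero
    new_width = round((TARGET_HEIGHT * aspect_ratio) / PATCH_SIZE) * PATCH_SIZE
    return new_width, TARGET_HEIGHT

# 1920x1080 -> aspect_ratio 1 -> (588, 588); 2100x700 -> aspect_ratio 3 -> (1764, 588)
```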
utils/depth_estimation.py CHANGED
@@ -9,7 +9,7 @@ from pathlib import Path
 import logging
 logging.getLogger("transformers.modeling_utils").setLevel(logging.ERROR)
 from utils.image_utils import (
-    resize_image_with_aspect_ratio
+    resize_image_with_aspect_ratio, multiply_and_blend_images, open_image
 )
 from utils.constants import TMPDIR
 from easydict import EasyDict as edict
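This hunk only widens the import list, pulling in the two helpers that the new depth-overlay functions in `utils/image_utils.py` (below) rely on. A hedged sketch of the kind of call this enables; the file names and the 50% opacity are illustrative, not taken from `depth_estimation.py`.

```python
from utils.image_utils import open_image, multiply_and_blend_images

# Hypothetical file names -- stand-ins for whatever the depth pipeline actually writes.
source = open_image("hexgrid_input.png")
depth_render = open_image("hexgrid_depth.png")

# Multiply-blend the depth render over the source at 50% opacity for a quick preview.
preview = multiply_and_blend_images(source, depth_render, alpha_percent=50)
preview.save("hexgrid_depth_preview.png")
```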
utils/image_utils.py CHANGED
@@ -4,6 +4,7 @@ from io import BytesIO
 import cairosvg
 import base64
 import numpy as np
+import rembg
 #from decimal import ROUND_CEILING
 from PIL import Image, ImageChops, ImageDraw, ImageEnhance, ImageFilter, ImageDraw, ImageOps, ImageMath
 from typing import List, Union, is_typeddict
@@ -363,18 +364,15 @@ def resize_image_with_aspect_ratio(image, target_width, target_height):
         # Image is taller than target aspect ratio
         new_height = target_height
         new_width = int(target_height * original_aspect)
-
     # Resize the image
     resized_image = image.resize((new_width, new_height), Image.LANCZOS)
     #print(f"Resized size: {resized_image.size}\n")
-
     # Create a new image with target dimensions and black background
     new_image = Image.new("RGB", (target_width, target_height), (0, 0, 0))
     # Paste the resized image onto the center of the new image
    paste_x = (target_width - new_width) // 2
     paste_y = (target_height - new_height) // 2
     new_image.paste(resized_image, (paste_x, paste_y))
-
     return new_image
 
 def lerp_imagemath(img1, img2, alpha_percent: int = 50):
@@ -463,10 +461,8 @@ def multiply_and_blend_images(base_image, image2, alpha_percent=50):
     base_image = base_image.convert('RGBA')
     image2 = image2.convert('RGBA')
     image2 = image2.resize(base_image.size)
-
     # Multiply the images
     multiplied_image = ImageChops.multiply(base_image, image2)
-
     # Blend the multiplied result with the original
     blended_image = Image.blend(base_image, multiplied_image, alpha)
     if name is not None:
@@ -1369,4 +1365,21 @@ def calculate_optimal_fill_dimensions(image: Image.Image):
     width = max(width, BASE_HEIGHT) if width == FIXED_DIMENSION else width
     height = max(height, BASE_HEIGHT) if height == FIXED_DIMENSION else height
 
-    return width, height
+    return width, height
+
+
+def combine_depth_map_with_image_path(image_path, depth_map_path, output_path, alpha: int= 95) -> str:
+    image =open_image(image_path)
+    depth_map = open_image(depth_map_path)
+    image = image.resize(depth_map.size)
+    depth_map = depth_map.convert("RGBA")
+    depth_no_background = rembg.remove(depth_map, session = rembg.new_session('u2net'))
+    overlay = Image.blend(image,depth_no_background, alpha= (alpha / 100))
+    overlay.save(output_path)
+    return output_path
+
+def combine_depth_map_with_Image(image:Image, depth_img:Image, width:int, height:int, alpha: int= 95, rembg_session_name: str ='u2net') -> Image:
+    resized_depth_image = resize_image_with_aspect_ratio(depth_img, width, height)
+    depth_no_background = rembg.remove(resized_depth_image, session = rembg.new_session(rembg_session_name))
+    combined_depth_img = multiply_and_blend_images(image, depth_no_background, (alpha / 100))
+    return combined_depth_img
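For reference, a hedged usage sketch of the two new depth-overlay helpers added at the end of `utils/image_utils.py`; the file names and the 1024×1024 canvas are placeholders, not values from the repo. Both helpers depend on the module-level `import rembg`, so the `rembg` package (and its onnxruntime backend) has to be available in the environment, and the `u2net` weights are downloaded on first use.

```python
from utils.image_utils import (
    open_image,
    combine_depth_map_with_image_path,
    combine_depth_map_with_Image,
)

# Path-based variant: strips the depth map's background with rembg, alpha-blends it
# over the source image at 95% opacity, and writes the result to disk.
out_path = combine_depth_map_with_image_path(
    "hex_grid.png", "hex_grid_depth.png", "hex_grid_overlay.png", alpha=95
)

# In-memory variant: fits the depth map onto the given canvas, removes its background,
# then multiply-blends it with the base image and returns a PIL Image.
base = open_image("hex_grid.png")
depth = open_image("hex_grid_depth.png")
combined = combine_depth_map_with_Image(base, depth, width=1024, height=1024, alpha=95)
combined.save("hex_grid_overlay_blend.png")
```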