Spaces: svjack · Runtime error

DyrusQZ committed
Commit 2e859fc · 1 Parent(s): b1f1770

update debug

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the complete change set.
Files changed (50):
  1. LHM/__pycache__/__init__.cpython-310.pyc +0 -0
  2. LHM/datasets/__pycache__/__init__.cpython-310.pyc +0 -0
  3. LHM/datasets/__pycache__/cam_utils.cpython-310.pyc +0 -0
  4. LHM/datasets/__pycache__/mixer.cpython-310.pyc +0 -0
  5. LHM/models/__pycache__/ESRGANer_utils.cpython-310.pyc +0 -0
  6. LHM/models/__pycache__/__init__.cpython-310.pyc +0 -0
  7. LHM/models/__pycache__/arcface_utils.cpython-310.pyc +0 -0
  8. LHM/models/__pycache__/embedder.cpython-310.pyc +0 -0
  9. LHM/models/__pycache__/modeling_human_lrm.cpython-310.pyc +0 -0
  10. LHM/models/__pycache__/transformer.cpython-310.pyc +0 -0
  11. LHM/models/__pycache__/transformer_dit.cpython-310.pyc +0 -0
  12. LHM/models/__pycache__/utils.cpython-310.pyc +0 -0
  13. LHM/models/encoders/__pycache__/__init__.cpython-310.pyc +0 -0
  14. LHM/models/encoders/__pycache__/dinov2_fusion_wrapper.cpython-310.pyc +0 -0
  15. LHM/models/encoders/__pycache__/sapiens_warpper.cpython-310.pyc +0 -0
  16. LHM/models/encoders/dinov2/__pycache__/__init__.cpython-310.pyc +0 -0
  17. LHM/models/encoders/dinov2/hub/__pycache__/__init__.cpython-310.pyc +0 -0
  18. LHM/models/encoders/dinov2/hub/__pycache__/backbones.cpython-310.pyc +0 -0
  19. LHM/models/encoders/dinov2/hub/__pycache__/utils.cpython-310.pyc +0 -0
  20. LHM/models/encoders/dinov2/layers/__pycache__/__init__.cpython-310.pyc +0 -0
  21. LHM/models/encoders/dinov2/layers/__pycache__/attention.cpython-310.pyc +0 -0
  22. LHM/models/encoders/dinov2/layers/__pycache__/block.cpython-310.pyc +0 -0
  23. LHM/models/encoders/dinov2/layers/__pycache__/dino_head.cpython-310.pyc +0 -0
  24. LHM/models/encoders/dinov2/layers/__pycache__/drop_path.cpython-310.pyc +0 -0
  25. LHM/models/encoders/dinov2/layers/__pycache__/layer_scale.cpython-310.pyc +0 -0
  26. LHM/models/encoders/dinov2/layers/__pycache__/mlp.cpython-310.pyc +0 -0
  27. LHM/models/encoders/dinov2/layers/__pycache__/patch_embed.cpython-310.pyc +0 -0
  28. LHM/models/encoders/dinov2/layers/__pycache__/swiglu_ffn.cpython-310.pyc +0 -0
  29. LHM/models/encoders/dinov2/models/__pycache__/__init__.cpython-310.pyc +0 -0
  30. LHM/models/encoders/dinov2/models/__pycache__/vision_transformer.cpython-310.pyc +0 -0
  31. LHM/models/encoders/dinov2_fusion_wrapper.py +4 -4
  32. LHM/models/encoders/sapiens_warpper.py +4 -4
  33. LHM/models/modeling_human_lrm.py +18 -18
  34. LHM/models/rendering/__pycache__/__init__.cpython-310.pyc +0 -0
  35. LHM/models/rendering/__pycache__/gs_renderer.cpython-310.pyc +0 -0
  36. LHM/models/rendering/__pycache__/gsplat_renderer.cpython-310.pyc +0 -0
  37. LHM/models/rendering/__pycache__/mesh_utils.cpython-310.pyc +0 -0
  38. LHM/models/rendering/__pycache__/smpl_x.cpython-310.pyc +0 -0
  39. LHM/models/rendering/__pycache__/smpl_x_voxel_dense_sampling.cpython-310.pyc +0 -0
  40. LHM/models/rendering/__pycache__/synthesizer.cpython-310.pyc +0 -0
  41. LHM/models/rendering/smpl_x_voxel_dense_sampling.py +29 -15
  42. LHM/models/rendering/utils/__pycache__/__init__.cpython-310.pyc +0 -0
  43. LHM/models/rendering/utils/__pycache__/math_utils.cpython-310.pyc +0 -0
  44. LHM/models/rendering/utils/__pycache__/ray_marcher.cpython-310.pyc +0 -0
  45. LHM/models/rendering/utils/__pycache__/ray_sampler.cpython-310.pyc +0 -0
  46. LHM/models/rendering/utils/__pycache__/renderer.cpython-310.pyc +0 -0
  47. LHM/models/rendering/utils/__pycache__/sh_utils.cpython-310.pyc +0 -0
  48. LHM/models/rendering/utils/__pycache__/typing.cpython-310.pyc +0 -0
  49. LHM/models/rendering/utils/__pycache__/utils.cpython-310.pyc +0 -0
  50. LHM/models/transformer.py +10 -10
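
Every source-file change in this commit follows one of two mechanical patterns: the accelerate logger import and every logger.* call are commented out, and .cuda() calls are removed so tensors stay on the default CPU device. A minimal sketch of the second pattern (illustrative only, not code from the repository):

import torch

# before: tensors were moved to the GPU eagerly
#   face = torch.LongTensor(face_list).cuda()
# after: tensors are created on the default (CPU) device, so the Space
# can start even when no CUDA runtime is available
face_list = [[0, 1, 2]]  # hypothetical face indices, for illustration only
face = torch.LongTensor(face_list)
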
LHM/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/__pycache__/__init__.cpython-310.pyc and b/LHM/__pycache__/__init__.cpython-310.pyc differ
 
LHM/datasets/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/datasets/__pycache__/__init__.cpython-310.pyc and b/LHM/datasets/__pycache__/__init__.cpython-310.pyc differ
 
LHM/datasets/__pycache__/cam_utils.cpython-310.pyc CHANGED
Binary files a/LHM/datasets/__pycache__/cam_utils.cpython-310.pyc and b/LHM/datasets/__pycache__/cam_utils.cpython-310.pyc differ
 
LHM/datasets/__pycache__/mixer.cpython-310.pyc CHANGED
Binary files a/LHM/datasets/__pycache__/mixer.cpython-310.pyc and b/LHM/datasets/__pycache__/mixer.cpython-310.pyc differ
 
LHM/models/__pycache__/ESRGANer_utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/ESRGANer_utils.cpython-310.pyc and b/LHM/models/__pycache__/ESRGANer_utils.cpython-310.pyc differ
 
LHM/models/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/__init__.cpython-310.pyc and b/LHM/models/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/__pycache__/arcface_utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/arcface_utils.cpython-310.pyc and b/LHM/models/__pycache__/arcface_utils.cpython-310.pyc differ
 
LHM/models/__pycache__/embedder.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/embedder.cpython-310.pyc and b/LHM/models/__pycache__/embedder.cpython-310.pyc differ
 
LHM/models/__pycache__/modeling_human_lrm.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/modeling_human_lrm.cpython-310.pyc and b/LHM/models/__pycache__/modeling_human_lrm.cpython-310.pyc differ
 
LHM/models/__pycache__/transformer.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/transformer.cpython-310.pyc and b/LHM/models/__pycache__/transformer.cpython-310.pyc differ
 
LHM/models/__pycache__/transformer_dit.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/transformer_dit.cpython-310.pyc and b/LHM/models/__pycache__/transformer_dit.cpython-310.pyc differ
 
LHM/models/__pycache__/utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/__pycache__/utils.cpython-310.pyc and b/LHM/models/__pycache__/utils.cpython-310.pyc differ
 
LHM/models/encoders/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/__pycache__/__init__.cpython-310.pyc and b/LHM/models/encoders/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/encoders/__pycache__/dinov2_fusion_wrapper.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/__pycache__/dinov2_fusion_wrapper.cpython-310.pyc and b/LHM/models/encoders/__pycache__/dinov2_fusion_wrapper.cpython-310.pyc differ
 
LHM/models/encoders/__pycache__/sapiens_warpper.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/__pycache__/sapiens_warpper.cpython-310.pyc and b/LHM/models/encoders/__pycache__/sapiens_warpper.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/__pycache__/__init__.cpython-310.pyc and b/LHM/models/encoders/dinov2/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/hub/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/hub/__pycache__/__init__.cpython-310.pyc and b/LHM/models/encoders/dinov2/hub/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/hub/__pycache__/backbones.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/hub/__pycache__/backbones.cpython-310.pyc and b/LHM/models/encoders/dinov2/hub/__pycache__/backbones.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/hub/__pycache__/utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/hub/__pycache__/utils.cpython-310.pyc and b/LHM/models/encoders/dinov2/hub/__pycache__/utils.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/__init__.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/attention.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/attention.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/attention.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/block.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/block.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/block.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/dino_head.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/dino_head.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/dino_head.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/drop_path.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/drop_path.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/drop_path.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/layer_scale.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/layer_scale.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/layer_scale.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/mlp.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/mlp.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/mlp.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/patch_embed.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/patch_embed.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/patch_embed.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/layers/__pycache__/swiglu_ffn.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/layers/__pycache__/swiglu_ffn.cpython-310.pyc and b/LHM/models/encoders/dinov2/layers/__pycache__/swiglu_ffn.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/models/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/models/__pycache__/__init__.cpython-310.pyc and b/LHM/models/encoders/dinov2/models/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/encoders/dinov2/models/__pycache__/vision_transformer.cpython-310.pyc CHANGED
Binary files a/LHM/models/encoders/dinov2/models/__pycache__/vision_transformer.cpython-310.pyc and b/LHM/models/encoders/dinov2/models/__pycache__/vision_transformer.cpython-310.pyc differ
 
LHM/models/encoders/dinov2_fusion_wrapper.py CHANGED
@@ -19,9 +19,9 @@ import kornia
 import torch
 import torch.nn as nn
 import torch.nn.functional as F
-from accelerate.logging import get_logger
+# from accelerate.logging import get_logger
 
-logger = get_logger(__name__)
+# logger = get_logger(__name__)
 
 
 class DPTHead(nn.Module):
@@ -126,7 +126,7 @@ class Dinov2FusionWrapper(nn.Module):
         self._freeze()
 
     def _freeze(self):
-        logger.warning(f"======== Freezing Dinov2FusionWrapper ========")
+        # logger.warning(f"======== Freezing Dinov2FusionWrapper ========")
         self.model.eval()
         for name, param in self.model.named_parameters():
             param.requires_grad = False
@@ -170,7 +170,7 @@ class Dinov2FusionWrapper(nn.Module):
 
         dinov2_hub = import_module(".dinov2.hub.backbones", package=__package__)
         model_fn = getattr(dinov2_hub, model_name)
-        logger.debug(f"Modulation dim for Dinov2 is {modulation_dim}.")
+        # logger.debug(f"Modulation dim for Dinov2 is {modulation_dim}.")
         model = model_fn(modulation_dim=modulation_dim, pretrained=pretrained)
         return model

LHM/models/encoders/sapiens_warpper.py CHANGED
@@ -18,10 +18,10 @@ import torch
 import torch.nn as nn
 import torch.nn.functional as F
 import torchvision
-from accelerate.logging import get_logger
+# from accelerate.logging import get_logger
 from tqdm import tqdm
 
-logger = get_logger(__name__)
+# logger = get_logger(__name__)
 
 timings = {}
 BATCH_SIZE = 64
@@ -188,7 +188,7 @@ class SapiensWrapper(nn.Module):
     @staticmethod
     def _build_sapiens(model_name: str, pretrained: bool = True):
 
-        logger.debug(f"Using Sapiens model: {model_name}")
+        # logger.debug(f"Using Sapiens model: {model_name}")
         USE_TORCHSCRIPT = "_torchscript" in model_name
 
         # build the model from a checkpoint file
@@ -201,7 +201,7 @@ class SapiensWrapper(nn.Module):
         return model
 
     def _freeze(self):
-        logger.warning(f"======== Freezing Sapiens Model ========")
+        # logger.warning(f"======== Freezing Sapiens Model ========")
         self.model.eval()
         for name, param in self.model.named_parameters():
             param.requires_grad = False
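
Both encoder wrappers above lose their accelerate logger the same way: the import and each call are commented out rather than replaced. If the messages were still wanted without a hard accelerate dependency, a stdlib fallback along these lines would behave equivalently in a single-process Space (a sketch, not part of this commit):

import logging

try:
    # accelerate's wrapper adds multi-process awareness on top of logging
    from accelerate.logging import get_logger
    logger = get_logger(__name__)
except ImportError:
    # a plain stdlib logger is sufficient when running single-process
    logger = logging.getLogger(__name__)

logger.debug("Using Sapiens model: %s", "sapiens_1b")  # placeholder model name
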
LHM/models/modeling_human_lrm.py CHANGED
@@ -14,7 +14,7 @@ import numpy as np
 import torch
 import torch.nn as nn
 import torch.nn.functional as F
-from accelerate.logging import get_logger
+# from accelerate.logging import get_logger
 from diffusers.utils import is_torch_version
 
 from LHM.models.arcface_utils import ResNetArcFace
@@ -29,7 +29,7 @@ from .embedder import CameraEmbedder
 from .rendering.synthesizer import TriplaneSynthesizer
 from .transformer import TransformerDecoder
 
-logger = get_logger(__name__)
+# logger = get_logger(__name__)
 
 
 class ModelHumanLRM(nn.Module):
@@ -212,42 +212,42 @@ class ModelHumanLRM(nn.Module):
         if encoder_type == "dino":
             from .encoders.dino_wrapper import DinoWrapper
 
-            logger.info("Using DINO as the encoder")
+            # logger.info("Using DINO as the encoder")
             return DinoWrapper
         elif encoder_type == "dinov2":
             from .encoders.dinov2_wrapper import Dinov2Wrapper
 
-            logger.info("Using DINOv2 as the encoder")
+            # logger.info("Using DINOv2 as the encoder")
             return Dinov2Wrapper
         elif encoder_type == "dinov2_unet":
             from .encoders.dinov2_unet_wrapper import Dinov2UnetWrapper
 
-            logger.info("Using Dinov2Unet as the encoder")
+            # logger.info("Using Dinov2Unet as the encoder")
             return Dinov2UnetWrapper
         elif encoder_type == "resunet":
             from .encoders.xunet_wrapper import XnetWrapper
 
-            logger.info("Using XnetWrapper as the encoder")
+            # logger.info("Using XnetWrapper as the encoder")
             return XnetWrapper
         elif encoder_type == "dinov2_featup":
             from .encoders.dinov2_featup_wrapper import Dinov2FeatUpWrapper
 
-            logger.info("Using Dinov2FeatUpWrapper as the encoder")
+            # logger.info("Using Dinov2FeatUpWrapper as the encoder")
             return Dinov2FeatUpWrapper
         elif encoder_type == "dinov2_dpt":
             from .encoders.dinov2_dpt_wrapper import Dinov2DPTWrapper
 
-            logger.info("Using Dinov2DPTWrapper as the encoder")
+            # logger.info("Using Dinov2DPTWrapper as the encoder")
             return Dinov2DPTWrapper
         elif encoder_type == "dinov2_fusion":
             from .encoders.dinov2_fusion_wrapper import Dinov2FusionWrapper
 
-            logger.info("Using Dinov2FusionWrapper as the encoder")
+            # logger.info("Using Dinov2FusionWrapper as the encoder")
             return Dinov2FusionWrapper
         elif encoder_type == "sapiens":
             from .encoders.sapiens_warpper import SapiensWrapper
 
-            logger.info("Using Sapiens as the encoder")
+            # logger.info("Using Sapiens as the encoder")
             return SapiensWrapper
 
     def forward_transformer(self, image_feats, camera_embeddings, query_points):
@@ -560,10 +560,10 @@ class ModelHumanLRM(nn.Module):
             },
         ]
 
-        logger.info("======== Weight Decay Parameters ========")
-        logger.info(f"Total: {len(decay_params)}")
-        logger.info("======== No Weight Decay Parameters ========")
-        logger.info(f"Total: {len(no_decay_params)}")
+        # logger.info("======== Weight Decay Parameters ========")
+        # logger.info(f"Total: {len(decay_params)}")
+        # logger.info("======== No Weight Decay Parameters ========")
+        # logger.info(f"Total: {len(no_decay_params)}")
 
         print(f"Total Params: {len(no_decay_params) + len(decay_params)}")
 
@@ -936,10 +936,10 @@ class ModelHumanLRMSapdinoBodyHeadSD3_5(ModelHumanLRMSapdinoBodyHeadSD3):
             },
         ]
 
-        logger.info("======== Weight Decay Parameters ========")
-        logger.info(f"Total: {len(decay_params)}")
-        logger.info("======== No Weight Decay Parameters ========")
-        logger.info(f"Total: {len(no_decay_params)}")
+        # logger.info("======== Weight Decay Parameters ========")
+        # logger.info(f"Total: {len(decay_params)}")
+        # logger.info("======== No Weight Decay Parameters ========")
+        # logger.info(f"Total: {len(no_decay_params)}")
 
         print(f"Total Params: {len(no_decay_params) + len(decay_params)}")
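
The if/elif chain above, whose per-branch logger.info calls are now commented out, is a string-to-class dispatch. A dict-based registry is a common alternative that keeps the lazy imports by deferring import_module to lookup time; a hypothetical sketch (only two entries shown, the rest would follow the same pattern):

from importlib import import_module

ENCODER_REGISTRY = {
    "dinov2_fusion": ("LHM.models.encoders.dinov2_fusion_wrapper", "Dinov2FusionWrapper"),
    "sapiens": ("LHM.models.encoders.sapiens_warpper", "SapiensWrapper"),
}

def get_encoder_cls(encoder_type: str):
    # raises KeyError for unknown encoder types
    module_name, class_name = ENCODER_REGISTRY[encoder_type]
    return getattr(import_module(module_name), class_name)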
 
LHM/models/rendering/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/__pycache__/__init__.cpython-310.pyc and b/LHM/models/rendering/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/rendering/__pycache__/gs_renderer.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/__pycache__/gs_renderer.cpython-310.pyc and b/LHM/models/rendering/__pycache__/gs_renderer.cpython-310.pyc differ
 
LHM/models/rendering/__pycache__/gsplat_renderer.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/__pycache__/gsplat_renderer.cpython-310.pyc and b/LHM/models/rendering/__pycache__/gsplat_renderer.cpython-310.pyc differ
 
LHM/models/rendering/__pycache__/mesh_utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/__pycache__/mesh_utils.cpython-310.pyc and b/LHM/models/rendering/__pycache__/mesh_utils.cpython-310.pyc differ
 
LHM/models/rendering/__pycache__/smpl_x.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/__pycache__/smpl_x.cpython-310.pyc and b/LHM/models/rendering/__pycache__/smpl_x.cpython-310.pyc differ
 
LHM/models/rendering/__pycache__/smpl_x_voxel_dense_sampling.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/__pycache__/smpl_x_voxel_dense_sampling.cpython-310.pyc and b/LHM/models/rendering/__pycache__/smpl_x_voxel_dense_sampling.cpython-310.pyc differ
 
LHM/models/rendering/__pycache__/synthesizer.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/__pycache__/synthesizer.cpython-310.pyc and b/LHM/models/rendering/__pycache__/synthesizer.cpython-310.pyc differ
 
LHM/models/rendering/smpl_x_voxel_dense_sampling.py CHANGED
@@ -318,8 +318,10 @@ class SMPLX_Mesh(object):
         return joint_offset
 
     def get_subdivider(self, subdivide_num):
-        vert = self.layer["neutral"].v_template.float().cuda()
-        face = torch.LongTensor(self.face).cuda()
+        # vert = self.layer["neutral"].v_template.float().cuda()
+        # face = torch.LongTensor(self.face).cuda()
+        vert = self.layer["neutral"].v_template.float()
+        face = torch.LongTensor(self.face)
         mesh = Meshes(vert[None, :, :], face[None, :, :])
 
         if subdivide_num > 0:
@@ -419,7 +421,8 @@ class SMPLX_Mesh(object):
         normal = (
             Meshes(
                 verts=mesh_neutral_pose[None, :, :],
-                faces=torch.LongTensor(self.face_upsampled).cuda()[None, :, :],
+                # faces=torch.LongTensor(self.face_upsampled).cuda()[None, :, :],
+                faces=torch.LongTensor(self.face_upsampled)[None, :, :],
             )
             .verts_normals_packed()
             .reshape(self.vertex_num_upsampled, 3)
@@ -537,11 +540,14 @@ class SMPLXVoxelMeshModel(nn.Module):
     ):
         """Smooth KNN to handle skirt deformation."""
 
-        lbs_weights = lbs_weights.cuda()
+        # lbs_weights = lbs_weights.cuda()
+        lbs_weights = lbs_weights
 
         dist = knn_points(
-            voxel_v.unsqueeze(0).cuda(),
-            template_v.unsqueeze(0).cuda(),
+            # voxel_v.unsqueeze(0).cuda(),
+            # template_v.unsqueeze(0).cuda(),
+            voxel_v.unsqueeze(0),
+            template_v.unsqueeze(0),
             K=1,
             return_nn=True,
         )
@@ -555,8 +561,10 @@ class SMPLXVoxelMeshModel(nn.Module):
         # Smooth Skinning
 
         knn_dis = knn_points(
-            voxel_v.unsqueeze(0).cuda(),
-            voxel_v.unsqueeze(0).cuda(),
+            # voxel_v.unsqueeze(0).cuda(),
+            # voxel_v.unsqueeze(0).cuda(),
+            voxel_v.unsqueeze(0),
+            voxel_v.unsqueeze(0),
             K=smooth_k + 1,
             return_nn=True,
         )
@@ -667,7 +675,7 @@ class SMPLXVoxelMeshModel(nn.Module):
         )
 
         coordinates = coordinates.view(-1, 3).float()
-        coordinates = coordinates.cuda()
+        # coordinates = coordinates.cuda()
 
         if os.path.exists(f"./pretrained_models/voxel_grid/voxel_{voxel_size}.pth"):
             print(f"load voxel_grid voxel_{voxel_size}.pth")
@@ -722,12 +730,15 @@ class SMPLXVoxelMeshModel(nn.Module):
         smpl_x = self.smpl_x
 
         # using KNN to query subdivided mesh
-        dense_pts = self.dense_pts.cuda()
+        # dense_pts = self.dense_pts.cuda()
+        dense_pts = self.dense_pts
         template_verts = self.smplx_layer.v_template
 
         nn_vertex_idxs = knn_points(
-            dense_pts.unsqueeze(0).cuda(),
-            template_verts.unsqueeze(0).cuda(),
+            # dense_pts.unsqueeze(0).cuda(),
+            # template_verts.unsqueeze(0).cuda(),
+            dense_pts.unsqueeze(0),
+            template_verts.unsqueeze(0),
             K=1,
             return_nn=True,
         ).idx
@@ -1046,7 +1057,8 @@ class SMPLXVoxelMeshModel(nn.Module):
         )  # [B, 54, 3]
         # smplx pose-dependent vertex offset
         pose = (
-            axis_angle_to_matrix(pose) - torch.eye(3)[None, None, :, :].float().cuda()
+            # axis_angle_to_matrix(pose) - torch.eye(3)[None, None, :, :].float().cuda()
+            axis_angle_to_matrix(pose) - torch.eye(3)[None, None, :, :].float()
         ).view(batch_size, (self.smpl_x.joint_num - 1) * 9)
         # (B, 54 * 9) x (54*9, V)
 
@@ -1499,7 +1511,8 @@ def read_smplx_param(smplx_data_root, shape_param_file, batch_size=1, device="cu
             osp.join(data_root_path, "cam_params", str(frame_idx) + ".json")
         ) as f:
             cam_param = {
-                k: torch.FloatTensor(v).cuda() for k, v in json.load(f).items()
+                # k: torch.FloatTensor(v).cuda() for k, v in json.load(f).items()
+                k: torch.FloatTensor(v) for k, v in json.load(f).items()
             }
             cam_param_list.append(cam_param)
 
@@ -1668,7 +1681,8 @@ def generate_smplx_point():
     for k, v in data.items():
         if k in smplx_keys:
             # print(k, v.shape)
-            smplx_params[k] = data[k].unsqueeze(0).cuda()
+            # smplx_params[k] = data[k].unsqueeze(0).cuda()
+            smplx_params[k] = data[k].unsqueeze(0)
     return smplx_params
 
     def sample_one(data):
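
Removing .cuda() pins every tensor to the CPU, which is what this debug update needs, but it also makes the code CPU-only. A more portable pattern derives the device from an input tensor, as the pose-offset computation above could (a sketch, not part of this commit):

import torch

def pose_offset_basis(pose_mats: torch.Tensor) -> torch.Tensor:
    # Create the identity on the same device and dtype as the input, so the
    # same code runs on CPU or CUDA without any hard-coded .cuda() calls.
    eye = torch.eye(3, device=pose_mats.device, dtype=pose_mats.dtype)
    return pose_mats - eye[None, None, :, :]

mats = torch.randn(2, 54, 3, 3)    # CPU by default; 54 matches the joint count noted above
offsets = pose_offset_basis(mats)  # works unchanged if mats lives on a GPU
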
LHM/models/rendering/utils/__pycache__/__init__.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/__init__.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/__init__.cpython-310.pyc differ
 
LHM/models/rendering/utils/__pycache__/math_utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/math_utils.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/math_utils.cpython-310.pyc differ
 
LHM/models/rendering/utils/__pycache__/ray_marcher.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/ray_marcher.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/ray_marcher.cpython-310.pyc differ
 
LHM/models/rendering/utils/__pycache__/ray_sampler.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/ray_sampler.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/ray_sampler.cpython-310.pyc differ
 
LHM/models/rendering/utils/__pycache__/renderer.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/renderer.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/renderer.cpython-310.pyc differ
 
LHM/models/rendering/utils/__pycache__/sh_utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/sh_utils.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/sh_utils.cpython-310.pyc differ
 
LHM/models/rendering/utils/__pycache__/typing.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/typing.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/typing.cpython-310.pyc differ
 
LHM/models/rendering/utils/__pycache__/utils.cpython-310.pyc CHANGED
Binary files a/LHM/models/rendering/utils/__pycache__/utils.cpython-310.pyc and b/LHM/models/rendering/utils/__pycache__/utils.cpython-310.pyc differ
 
LHM/models/transformer.py CHANGED
@@ -11,10 +11,10 @@ from typing import Any, Dict, Optional, Tuple, Union
 
 import torch
 import torch.nn as nn
-from accelerate.logging import get_logger
+# from accelerate.logging import get_logger
 from diffusers.utils import is_torch_version
 
-logger = get_logger(__name__)
+# logger = get_logger(__name__)
 
 
 class TransformerDecoder(nn.Module):
@@ -106,7 +106,7 @@ class TransformerDecoder(nn.Module):
             ), f"Condition and modulation are not supported for BasicBlock"
             from .block import BasicBlock
 
-            logger.debug(f"Using BasicBlock")
+            # logger.debug(f"Using BasicBlock")
             return partial(BasicBlock, inner_dim=inner_dim)
         elif self.block_type == "cond":
             assert (
@@ -117,10 +117,10 @@ class TransformerDecoder(nn.Module):
             ), f"Modulation dimension is not supported for ConditionBlock"
             from .block import ConditionBlock
 
-            logger.debug(f"Using ConditionBlock")
+            # logger.debug(f"Using ConditionBlock")
             return partial(ConditionBlock, inner_dim=inner_dim, cond_dim=cond_dim)
         elif self.block_type == "mod":
-            logger.error(f"modulation without condition is not implemented")
+            # logger.error(f"modulation without condition is not implemented")
             raise NotImplementedError(
                 f"modulation without condition is not implemented"
             )
@@ -130,7 +130,7 @@ class TransformerDecoder(nn.Module):
             ), f"Condition and modulation dimensions must be specified for ConditionModulationBlock"
             from .block import ConditionModulationBlock
 
-            logger.debug(f"Using ConditionModulationBlock")
+            # logger.debug(f"Using ConditionModulationBlock")
             return partial(
                 ConditionModulationBlock,
                 inner_dim=inner_dim,
@@ -138,25 +138,25 @@
                 mod_dim=mod_dim,
             )
         elif self.block_type == "cogvideo_cond":
-            logger.debug(f"Using CogVideoXBlock")
+            # logger.debug(f"Using CogVideoXBlock")
             from LHM.models.transformer_dit import CogVideoXBlock
 
             # assert inner_dim == cond_dim, f"inner_dim:{inner_dim}, cond_dim:{cond_dim}"
             return partial(CogVideoXBlock, dim=inner_dim, attention_bias=True)
         elif self.block_type == "sd3_cond":
-            logger.debug(f"Using SD3JointTransformerBlock")
+            # logger.debug(f"Using SD3JointTransformerBlock")
            from LHM.models.transformer_dit import SD3JointTransformerBlock
 
             return partial(SD3JointTransformerBlock, dim=inner_dim, qk_norm="rms_norm")
         elif self.block_type == "sd3_mm_cond":
-            logger.debug(f"Using SD3MMJointTransformerBlock")
+            # logger.debug(f"Using SD3MMJointTransformerBlock")
             from LHM.models.transformer_dit import SD3MMJointTransformerBlock
 
             return partial(
                 SD3MMJointTransformerBlock, dim=inner_dim, qk_norm="rms_norm"
             )
         elif self.block_type == "sd3_mm_bh_cond":
-            logger.debug(f"Using SD3MMJointTransformerBlock")
+            # logger.debug(f"Using SD3MMJointTransformerBlock")
             from LHM.models.transformer_dit import SD3BodyHeadMMJointTransformerBlock
 
             return partial(
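
Each branch in this file returns a functools.partial that pre-binds the chosen block's constructor arguments, so callers can instantiate every block type through one uniform call. A minimal illustration of the pattern with a stand-in block (not one of the LHM classes):

from functools import partial

import torch.nn as nn

class ToyBlock(nn.Module):
    """Stand-in for BasicBlock, ConditionBlock, etc."""

    def __init__(self, inner_dim: int, num_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(inner_dim, num_heads)

block_fn = partial(ToyBlock, inner_dim=512)               # pre-bind, as the branches above do
layers = nn.ModuleList(block_fn(num_heads=8) for _ in range(4))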