|
---
license: apache-2.0
tags:
- image-to-3d
---
|
|
|
<div align="center"> |
|
|
|
# InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models |
|
|
|
<a href="https://arxiv.org/abs/2404.07191"><img src="https://img.shields.io/badge/ArXiv-2404.07191-brightgreen"></a> |
|
<a href="https://huggingface.co/TencentARC/InstantMesh"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Model_Card-Huggingface-orange"></a> |
|
<a href="https://huggingface.co/spaces/TencentARC/InstantMesh"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Gradio%20Demo-Huggingface-orange"></a> <br> |
|
<a href="https://replicate.com/camenduru/instantmesh"><img src="https://img.shields.io/badge/Demo-Replicate-blue"></a> |
|
<a href="https://colab.research.google.com/github/camenduru/InstantMesh-jupyter/blob/main/InstantMesh_jupyter.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a> |
|
<a href="https://github.com/jtydhr88/ComfyUI-InstantMesh"><img src="https://img.shields.io/badge/Demo-ComfyUI-8A2BE2"></a> |
|
|
|
</div> |
|
|
|
--- |
|
|
|
InstantMesh is a feed-forward framework for efficient 3D mesh generation from a single image, built on the [LRM/Instant3D](https://huggingface.co/papers/2311.04400) architecture.
|
|
|
# ⚙️ Dependencies and Installation
|
|
|
We recommend using `Python>=3.10`, `PyTorch>=2.1.0`, and `CUDA>=12.1`. |
|
```bash
conda create --name instantmesh python=3.10
conda activate instantmesh
pip install -U pip

# Ensure Ninja is installed
conda install Ninja

# Install the correct version of CUDA
conda install cuda -c nvidia/label/cuda-12.1.0

# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7

# Install other requirements
pip install -r requirements.txt
```
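As a quick sanity check after installation, you can confirm that the CUDA-enabled PyTorch build and xformers import cleanly:

```bash
# Optional sanity check: PyTorch should report 2.1.0 with CUDA available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import xformers; print(xformers.__version__)"
```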
|
|
|
# 💫 How to Use
|
|
|
## Download the models |
|
|
|
We provide 4 sparse-view reconstruction model variants and a customized Zero123++ UNet for white-background image generation in the [model card](https://huggingface.co/TencentARC/InstantMesh). |
|
|
|
Our inference script will download the models automatically. Alternatively, you can manually download the models and put them under the `ckpts/` directory. |
|
|
|
By default, we use the `instant-mesh-large` reconstruction model variant. |
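For a manual download, one option is the `huggingface-cli` tool that ships with `huggingface_hub`; a minimal sketch that fetches the whole model repository into `ckpts/` (you can restrict it to the files for the variants you actually need):

```bash
# Fetch the InstantMesh model repository into ckpts/
pip install -U "huggingface_hub[cli]"
huggingface-cli download TencentARC/InstantMesh --local-dir ckpts
```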
|
|
|
## Start a local Gradio demo
|
|
|
To start a Gradio demo on your local machine, simply run:
|
```bash
python app.py
```
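By default, Gradio serves the demo at `http://127.0.0.1:7860`. If that port is occupied, Gradio's standard environment variables let you change it (assuming `app.py` does not hard-code the host or port):

```bash
# Expose the demo on all interfaces and a custom port via Gradio env vars
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=8080 python app.py
```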
|
|
|
If your machine has multiple GPUs, the demo app automatically runs on two of them to save memory. You can also force it to run on a single GPU:
|
```bash
CUDA_VISIBLE_DEVICES=0 python app.py
```
|
|
|
Alternatively, you can run the demo with Docker. Please follow the instructions in the [docker](docker/) directory.
|
|
|
## Running with command line |
|
|
|
To generate 3D meshes from images via command line, simply run: |
|
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video
```
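To process several images in one go, you can loop over them in the shell; a minimal sketch (each invocation reloads the models, so this favors convenience over speed):

```bash
# Run every example image through the pipeline, one at a time
for img in examples/*.png; do
    python run.py configs/instant-mesh-large.yaml "$img" --save_video
done
```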
|
|
|
We use [rembg](https://github.com/danielgatis/rembg) to segment the foreground object. If the input image already has an alpha mask, please specify the `--no_rembg` flag:
|
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video --no_rembg
```
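rembg also ships a CLI, so you can pre-compute the alpha mask once and then skip segmentation at generation time; for example (the `_rgba` output name is just illustrative):

```bash
# Pre-compute the alpha mask with rembg, then reuse it via --no_rembg
rembg i examples/hatsune_miku.png examples/hatsune_miku_rgba.png
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku_rgba.png --save_video --no_rembg
```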
|
|
|
By default, our script exports a `.obj` mesh with vertex colors. Specify the `--export_texmap` flag if you want to export a mesh with a texture map instead (this takes longer):
|
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video --export_texmap
```
|
|
|
Please use a different `.yaml` config file from the [configs](./configs) directory if you want to use another reconstruction model variant. For example, to generate with the `instant-nerf-large` model:
|
```bash
python run.py configs/instant-nerf-large.yaml examples/hatsune_miku.png --save_video
```
|
**Note:** When using the `NeRF` model variants for image-to-3D generation, exporting a mesh with a texture map via `--export_texmap` may take a long time in the UV unwrapping step, since the default iso-surface extraction resolution is `256`. You can set a lower iso-surface extraction resolution in the config file, as sketched below.
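As an illustration, the change might look like the excerpt below; the key name `mesh_resolution` is an assumption, so match it against the keys actually present in your `.yaml` file:

```yaml
# Hypothetical excerpt of configs/instant-nerf-large.yaml
infer_config:
  mesh_resolution: 128  # lowered from the default 256 to speed up extraction
```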
|
|
|
# 💻 Training
|
|
|
We provide our training code to facilitate future research, but we cannot release the training dataset due to its size. Please refer to our [dataloader](src/data/objaverse.py) for more details.
|
|
|
To train the sparse-view reconstruction models, please run: |
|
```bash
# Training on the NeRF representation
python train.py --base configs/instant-nerf-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1

# Training on the Mesh representation
python train.py --base configs/instant-mesh-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```
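If only a single GPU is available, the Lightning-style `--gpus` flag should also accept a single index (a sketch, assuming `train.py` follows PyTorch Lightning's argument conventions; reduce the batch size in the config accordingly):

```bash
# Single-GPU training on the NeRF representation (note the trailing comma)
python train.py --base configs/instant-nerf-large-train.yaml --gpus 0, --num_nodes 1
```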
|
|
|
We also provide our Zero123++ fine-tuning code since it is frequently requested. The running command is: |
|
```bash
python train.py --base configs/zero123plus-finetune.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```
|
|
|
# 📚 Citation
|
|
|
If you find our work useful for your research or applications, please cite using this BibTeX: |
|
|
|
```BibTeX
@article{xu2024instantmesh,
  title={InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models},
  author={Xu, Jiale and Cheng, Weihao and Gao, Yiming and Wang, Xintao and Gao, Shenghua and Shan, Ying},
  journal={arXiv preprint arXiv:2404.07191},
  year={2024}
}
```
|
|
|
# 🤗 Acknowledgements
|
|
|
We thank the authors of the following projects for their excellent contributions to 3D generative AI! |
|
|
|
- [Zero123++](https://github.com/SUDO-AI-3D/zero123plus) |
|
- [OpenLRM](https://github.com/3DTopia/OpenLRM) |
|
- [FlexiCubes](https://github.com/nv-tlabs/FlexiCubes) |
|
- [Instant3D](https://instant-3d.github.io/) |
|
|
|
Thanks to [@camenduru](https://github.com/camenduru) for implementing the [Replicate demo](https://replicate.com/camenduru/instantmesh) and [Colab demo](https://colab.research.google.com/github/camenduru/InstantMesh-jupyter/blob/main/InstantMesh_jupyter.ipynb)!

Thanks to [@jtydhr88](https://github.com/jtydhr88) for implementing [ComfyUI support](https://github.com/jtydhr88/ComfyUI-InstantMesh)!