---
tags:
- pytorch_model_hub_mixin
---

<div align="center">
<h1>VGGT: Visual Geometry Grounded Transformer</h1>

<!-- <a href=""><img src='https://img.shields.io/badge/arXiv-VGGT' alt='Paper PDF'></a> -->
<!-- <a href=''><img src='https://img.shields.io/badge/Project_Page-green' alt='Project Page'></a> -->
<a href="https://jytime.github.io/data/VGGT_CVPR25.pdf"><img src='https://img.shields.io/badge/Paper-VGGT' alt='Paper PDF'></a>
<a href='https://huggingface.co/spaces/facebook/vggt'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue'></a>

**[Meta AI Research](https://ai.facebook.com/research/)**; **[University of Oxford, VGG](https://www.robots.ox.ac.uk/~vgg/)**

[Jianyuan Wang](https://jytime.github.io/), [Minghao Chen](https://silent-chen.github.io/), [Nikita Karaev](https://nikitakaraevv.github.io/), [Andrea Vedaldi](https://www.robots.ox.ac.uk/~vedaldi/), [Christian Rupprecht](https://chrirupp.github.io/), [David Novotny](https://d-novotny.github.io/)
</div>

## Overview

Visual Geometry Grounded Transformer (VGGT, CVPR 2025) is a feed-forward neural network that directly infers all key 3D attributes of a scene, including extrinsic and intrinsic camera parameters, point maps, depth maps, and 3D point tracks, **from one, a few, or hundreds of views, within seconds**.

## Quick Start

Please refer to our [GitHub repo](https://github.com/facebookresearch/vggt) for installation instructions and full usage examples.
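As a rough orientation, loading the checkpoint and running inference looks roughly like the sketch below. This is a hedged sketch, not the authoritative quick-start: it assumes the `vggt` package from the GitHub repo is installed, and that the class and helper names (`VGGT`, `load_and_preprocess_images`) and the `facebook/VGGT-1B` Hub ID match the repo's current README; consult the repo for the exact, up-to-date API.

```python
# Sketch only: assumes the `vggt` package from the GitHub repo is installed
# and that these module paths match the repo's current layout.
import torch
from vggt.models.vggt import VGGT
from vggt.utils.load_fn import load_and_preprocess_images

device = "cuda" if torch.cuda.is_available() else "cpu"

# The model is published with the PyTorchModelHubMixin integration,
# so it can be loaded directly from the Hub by its model ID.
model = VGGT.from_pretrained("facebook/VGGT-1B").to(device)
model.eval()

# One, a few, or many views of the same scene (paths are placeholders).
image_paths = ["examples/room/images/img_0.png"]
images = load_and_preprocess_images(image_paths).to(device)

with torch.no_grad():
    # A single forward pass returns the scene attributes
    # (camera parameters, depth maps, point maps, point tracks).
    predictions = model(images)
```

The feed-forward design is the point of the sketch: there is no per-scene optimization loop, just one forward pass over however many views you provide.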
## Citation

If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work:

```bibtex
@inproceedings{wang2025vggt,
  title={VGGT: Visual Geometry Grounded Transformer},
  author={Wang, Jianyuan and Chen, Minghao and Karaev, Nikita and Vedaldi, Andrea and Rupprecht, Christian and Novotny, David},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```