Neural 3D Mesh Renderer

Hiroharu Kato1    Yoshitaka Ushiku1    Tatsuya Harada1,2
1The University of Tokyo    2RIKEN
CVPR 2018 (spotlight)

3D Mesh Reconstruction



2D-to-3D Style Transfer



3D DeepDream



These applications are realized by redefining the “backward pass” of a 3D mesh renderer and incorporating it into neural networks.

Short introduction

We propose Neural Renderer, a 3D mesh renderer that can be integrated into neural networks.

We apply this renderer to (a) 3D mesh reconstruction from a single image and (b) gradient-based 3D mesh editing, namely 2D-to-3D style transfer and 3D DeepDream.

Abstract

For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.

Paper

The full paper is available at https://arxiv.org/abs/1711.07566.

Results

Single-image 3D reconstruction

A 3D mesh can be accurately reconstructed from a single image using our method.

Comparison with voxel-based method [1]

Mesh reconstruction does not suffer from the low resolution and cubic artifacts of voxel reconstruction.

Our approach outperforms the voxel-based approach [1] in 10 out of 13 categories on the voxel IoU metric.

Method               airplane  bench     dresser   car       chair     display   lamp      speaker   rifle     sofa      table     phone     vessel    mean
Retrieval-based [1]  .5564     .4875     .5713     .6519     .3512     .3958     .2905     .4600     .5133     .5314     .3097     .6696     .4078     .4766
Voxel-based [1]      .5556     .4924     .6823     .7123     .4494     .5395     .4223     .5868     .5987     .6221     .4938     .7504     .5507     .5736
Mesh-based (ours)    .6172     .4998     .7143     .7095     .4990     .5831     .4126     .6536     .6322     .6735     .4829     .7777     .5645     .6016
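
As a reference for the numbers above, the voxel IoU metric can be computed as in the following sketch, assuming both the reconstructed mesh and the ground truth have been voxelized into binary occupancy grids of the same resolution; the function name voxel_iou and variable names are illustrative, not part of our released code.

import numpy as np

def voxel_iou(pred_voxels: np.ndarray, gt_voxels: np.ndarray) -> float:
    # Intersection-over-union between two binary occupancy grids of identical shape.
    pred = pred_voxels.astype(bool)
    gt = gt_voxels.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / float(union) if union > 0 else 1.0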

2D-to-3D style transfer

The styles of the paintings are accurately transferred to the textures and shapes by our method. Note the outline of the bunny and the lid of the teapot.

The style images are Thomson No. 5 (Yellow Sunset) (D. Coupland, 2011), The Tower of Babel (P. Bruegel the Elder, 1563), The Scream (E. Munch, 1910), and Portrait of Pablo Picasso (J. Gris, 1912).

3D DeepDream

This is a 3D version of DeepDream.

Technical overview

Understanding the 3D world from 2D images is one of the fundamental problems in computer vision. Rendering, the conversion from 3D to 2D, lies on the boundary between the 3D world and 2D images. A polygon mesh is an efficient, rich, and intuitive 3D representation. Therefore, the “backward pass” of a 3D mesh renderer is worth pursuing.

Rendering cannot be integrated into neural networks without modification because back-propagation is blocked by the discrete rasterization step. In this work, we propose an approximate gradient for rendering, which enables end-to-end training of neural networks that include a renderer. Please see the paper for the details of our renderer.
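
The snippet below is a minimal, simplified illustration of this idea, not the exact formulation in the paper: the forward pass performs a discrete coverage test, while the backward pass substitutes a hand-crafted surrogate gradient so that a loss on the rendered image can still move the geometry. All names (ApproxCoverage, signed_distance, sharpness) are hypothetical.

import torch

class ApproxCoverage(torch.autograd.Function):
    # Pixel coverage test with an approximate (surrogate) gradient.

    @staticmethod
    def forward(ctx, signed_distance, sharpness):
        # Discrete, rasterization-like decision: the pixel is lit iff it lies inside the face.
        ctx.save_for_backward(signed_distance)
        ctx.sharpness = sharpness
        return (signed_distance > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Approximate gradient: pretend the step was a steep sigmoid, so gradients
        # still flow toward the (vertex-dependent) signed distance.
        (signed_distance,) = ctx.saved_tensors
        s = torch.sigmoid(ctx.sharpness * signed_distance)
        return grad_output * ctx.sharpness * s * (1 - s), None

# Toy usage: nudge a vertex-dependent quantity so that a pixel becomes covered.
x = torch.tensor(-0.1, requires_grad=True)    # hypothetical signed distance to a face edge
pixel = ApproxCoverage.apply(x, 20.0)         # forward: 0.0 (pixel not covered yet)
loss = (pixel - 1.0) ** 2                     # we want the pixel to be covered
loss.backward()
print(x.grad)                                 # non-zero, despite the discrete forward pass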

The applications demonstrated above were performed using this renderer. The figure below shows the pipelines.

The 3D mesh generator is trained with silhouette images: during training, the generator minimizes the difference between the silhouettes of the reconstructed 3D shape and the true silhouettes, as sketched below.
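
A minimal sketch of such a training step, assuming a differentiable silhouette renderer and a mesh generator network; generator, renderer, and the soft IoU-style loss here are illustrative stand-ins rather than our actual implementation.

def silhouette_loss(pred_silhouette, true_silhouette, eps=1e-6):
    # One minus the soft intersection-over-union between the rendered silhouette
    # and the ground-truth silhouette; both tensors hold values in [0, 1].
    intersection = (pred_silhouette * true_silhouette).sum()
    union = (pred_silhouette + true_silhouette - pred_silhouette * true_silhouette).sum()
    return 1.0 - intersection / (union + eps)

def training_step(generator, renderer, optimizer, image, true_silhouette, faces, camera):
    vertices = generator(image)                          # predict mesh vertices from one image
    pred_silhouette = renderer(vertices, faces, camera)  # differentiable silhouette rendering
    loss = silhouette_loss(pred_silhouette, true_silhouette)
    optimizer.zero_grad()
    loss.backward()                                      # gradients flow back through the renderer
    optimizer.step()
    return loss.item()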

2D-to-3D style transfer is performed by optimizing the shape and texture of a mesh to minimize a style loss defined on the rendered images. 3D DeepDream is performed in a similar way.
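
The sketch below illustrates one such gradient-based editing step in the 3D DeepDream flavor, assuming a differentiable renderer and a pretrained CNN feature extractor; replacing the objective with a style loss gives 2D-to-3D style transfer. All names are illustrative, not our released API.

import torch

def deepdream_step(vertices, textures, faces, camera, renderer, features, lr=0.01):
    vertices = vertices.detach().requires_grad_(True)
    textures = textures.detach().requires_grad_(True)
    image = renderer(vertices, faces, textures, camera)  # differentiable rendering
    activations = features(image)                        # e.g. an intermediate CNN layer
    objective = -activations.pow(2).mean()                # maximize activation magnitude
    objective.backward()                                  # gradients reach the 3D mesh
    with torch.no_grad():
        vertices -= lr * vertices.grad                    # update shape ...
        textures -= lr * textures.grad                    # ... and texture
    return vertices.detach(), textures.detach()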

Both applications are realized by propagating information from 2D image space into 3D space through our renderer.

More details can be found in the paper.

Code

Citation

@InProceedings{kato2018renderer,
    title={Neural 3D Mesh Renderer},
    author={Hiroharu Kato and Yoshitaka Ushiku and Tatsuya Harada},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2018}
}

References

  1. X. Yan et al. “Perspective Transformer Nets: Learning Single-view 3D Object Reconstruction without 3D Supervision.” Advances in Neural Information Processing Systems (NIPS). 2016.

Papers that use neural renderer

  1. Weakly-Supervised Domain Adaptation via GAN and Mesh Model for Estimating 3D Hand Poses Interacting Objects [Baek et al. CVPR 2020]
  2. Coherent Reconstruction of Multiple Humans From a Single Image [Jiang et al. CVPR 2020]
  3. End-to-End Optimization of Scene Layout [Luo et al. CVPR 2020]
  4. Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images [Zhou et al. CVPR 2020]
  5. Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild [Wu et al. CVPR 2020]
  6. Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction [Hasson et al. CVPR 2020]
  7. End to End Trainable Active Contours via Differentiable Rendering [Gur et al. ICLR 2020]
  8. Neural Puppet: Generative Layered Cartoon Characters [Poursaeed et al. WACV 2020]
  9. Changing clothing on people images using generative adversarial networks [Pozdniakov, Master thesis, Ukrainian Catholic University, 2020]
  10. Semantic Correspondence via 2D-3D-2D Cycle [You et al. arXiv 2020]
  11. Learning Pose-invariant 3D Object Reconstruction from Single-view Images [Peng et al. arXiv 2020]
  12. BCNet: Learning Body and Cloth Shape from A Single Image [Jiang et al. arXiv 2020]
  13. Tackling Two Challenges of 6D Object Pose Estimation: Lack of Real Annotated RGB Images and Scalability to Number of Objects [Sock et al. arXiv 2020]
  14. EllipBody: A Light-weight and Part-based Representation for Human Pose and Shape Recovery [Wang et al. arXiv 2020]
  15. Neural Mesh Refiner for 6-DoF Pose Estimation [Wu et al. arXiv 2020]
  16. Reconstruct, Rasterize and Backprop: Dense shape and pose estimation from a single image [Pokale et al. arXiv 2020]
  17. Adversarial Attacks for Embodied Agents [Liu et al. arXiv 2020]
  18. MeshSDF: Differentiable Iso-Surface Extraction [Remelli et al. arXiv 2020]
  19. Learning View Priors for Single-view 3D Reconstruction [Kato and Harada. CVPR 2019]
  20. Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects [Alcorn et al. CVPR 2019]
  21. MeshAdv: Adversarial Meshes for Visual Recognition [Xiao et al. CVPR 2019]
  22. Pushing the Envelope for RGB-Based Dense 3D Hand Pose Estimation via Neural Rendering [Baek et al. CVPR 2019]
  23. Canonical Surface Mapping via Geometric Cycle Consistency [Kulkarni et al. ICCV 2019]
  24. Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis [Liu et al. ICCV 2019]
  25. Three-D Safari: Learning to Estimate Zebra Pose, Shape, and Texture from Images “In the Wild” [Zuffi et al. ICCV 2019]
  26. End-to-end Hand Mesh Recovery from a Monocular RGB Image [Zhang et al. ICCV 2019]
  27. FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape from Single RGB Images [Zimmermann et al. ICCV 2019]
  28. Localization and Mapping using Instance-specific Mesh Models [Feng et al. IROS 2019]
  29. Human Motion Generation Based on GAN Toward Unsupervised 3D Human Pose Estimation [Yamane et al. ACPR 2019]
  30. Single-image Mesh Reconstruction and Pose Estimation via Generative Normal Map [Xiang et al. CASA 2019]
  31. Towards Analyzing Semantic Robustness of Deep Neural Networks [Abdullah & Ghanem ICCVW 2019]
  32. Lifting AutoEncoders: Unsupervised Learning of a Fully-Disentangled 3D Morphable Model using Deep Non-Rigid Structure from Motion [Sahasrabudhe et al. ICCVW 2019]
  33. TriDepth: Triangular Patch-based Deep Depth Prediction [Kaneko et al. ICCVW 2019]
  34. Transporting Real World Rigid and Articulated Objects into Egocentric VR Experiences [IEEEVR 2019 poster]
  35. Generating 3D Human Animations from Single Monocular Images [Marwah, Master thesis, CMU, 2019]
  36. Self-supervised Learning of 3D Objects from Natural Images [Kato & Harada, arXiv 2019]
  37. STA: Adversarial Attacks on Siamese Trackers [Wu et al. arXiv 2019]
  38. 3D-Aware Scene Manipulation via Inverse Graphics [Yao et al. NIPS 2018]
  39. Learning Category-Specific Mesh Reconstruction from Image Collections [Kanazawa et al. ECCV 2018]
