Style-NeRF2NeRF

3D Style Transfer from Style-Aligned Multi-View Images

SIGGRAPH Asia 2024

Haruo Fujiwara¹, Yusuke Mukuta¹,², Tatsuya Harada¹,²
¹The University of Tokyo, ²RIKEN
Code coming soon.

Abstract

We propose a simple yet effective pipeline for stylizing a 3D scene, harnessing the power of 2D image diffusion models. Given a NeRF model reconstructed from a set of multi-view images, we perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model. Given a target style prompt, we first generate perceptually similar multi-view images by leveraging a depth-conditioned diffusion model with an attention-sharing mechanism. Next, based on the stylized multi-view images, we propose to guide the style transfer process with the sliced Wasserstein loss based on the feature maps extracted from a pre-trained CNN model. Our pipeline consists of decoupled steps, allowing users to test various prompt ideas and preview the stylized 3D result before proceeding to the NeRF fine-tuning stage. We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
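To make the guidance step concrete, below is a minimal PyTorch sketch of a sliced Wasserstein loss between CNN feature maps, in the spirit of the fine-tuning objective described above; it is not the authors' implementation. The VGG-16 layer cutoff (up to relu3_3), the number of random projections, and the render_view / stylized_views helpers in the usage comment are illustrative assumptions.

# Minimal sketch (not the authors' code) of a sliced Wasserstein style loss:
# per-pixel VGG features of a rendered view are matched to those of the
# corresponding stylized view by comparing sorted 1D projections along
# random directions in feature space.
import torch
import torchvision.models as models

# Pre-trained VGG-16 feature extractor; stopping at relu3_3 is an assumption.
vgg_features = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def sliced_wasserstein_loss(feat_render, feat_style, num_projections=64):
    """Sliced Wasserstein distance between two feature maps of shape (B, C, H, W).
    Assumes both feature maps share the same spatial resolution."""
    B, C, H, W = feat_render.shape
    x = feat_render.reshape(B, C, H * W)   # (B, C, N) per-pixel feature vectors
    y = feat_style.reshape(B, C, -1)
    # Random unit directions in feature space.
    dirs = torch.randn(num_projections, C, device=x.device)
    dirs = dirs / dirs.norm(dim=1, keepdim=True)
    # Project features onto each direction: (B, P, N).
    proj_x = torch.einsum('pc,bcn->bpn', dirs, x)
    proj_y = torch.einsum('pc,bcn->bpn', dirs, y)
    # 1D optimal transport reduces to comparing sorted projections.
    return (proj_x.sort(dim=-1).values - proj_y.sort(dim=-1).values).pow(2).mean()

# Usage sketch: render a view from the NeRF, extract features of the render and
# of the matching stylized image, then backpropagate the loss into the NeRF.
# rendered, stylized = render_view(nerf, cam), stylized_views[cam]   # assumed helpers
# loss = sliced_wasserstein_loss(vgg_features(rendered), vgg_features(stylized))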

Video

Coming soon...

Overall Pipeline

Style Blending

Given two sets of stylized views in different styles, one can obtain a style-blended scene by refining the source NeRF model toward the Wasserstein barycenter of the two styles, where t is the blending weight.
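As a rough illustration of the blending step, and assuming the barycenter is approached by minimizing a convex combination of the per-style sliced Wasserstein losses with weight t (an interpretation for illustration, not the authors' exact formulation), the blended objective could look like:

# Hypothetical blending sketch, not the authors' code: combine the per-style
# sliced Wasserstein losses of the rendered features against the two stylized
# feature sets using a blending weight t in [0, 1].
def blended_style_loss(feat_render, feat_style_a, feat_style_b, t, sw_loss):
    """t = 0 reproduces style A, t = 1 reproduces style B."""
    return (1.0 - t) * sw_loss(feat_render, feat_style_a) + t * sw_loss(feat_render, feat_style_b)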

BibTeX

@inproceedings{fujiwara2024sn2n,
    title     = {Style-NeRF2NeRF: 3D Style Transfer from Style-Aligned Multi-View Images},
    author    = {Haruo Fujiwara and Yusuke Mukuta and Tatsuya Harada},
    booktitle = {SIGGRAPH Asia 2024 Conference Papers},
    year      = {2024}
}