Research Article | Open Access

Neural scene baking for permutation invariant transparency rendering with real-time global illumination

Faculty of Science and Engineering, Waseda University, Tokyo, Japan

Abstract

Neural rendering provides a fundamentally new way to render photorealistic images. Similar to traditional light-baking methods, neural rendering utilizes neural networks to bake representations of scenes, materials, and lights into latent vectors learned from path-tracing ground truths. However, existing neural rendering algorithms typically use G-buffers to provide position, normal, and texture information about scenes. These are prone to occlusion by transparent surfaces, leading to distortion and loss of detail in rendered images. To address this limitation, we propose a novel neural rendering pipeline that accurately renders the scene behind transparent surfaces with global illumination and variable scenes. Our method separates the G-buffers for opaque and transparent objects, retaining G-buffer information behind transparent objects. Additionally, to render transparent objects with permutation invariance, we have designed a new permutation-invariant neural blending function. We have integrated our algorithm into an efficient custom renderer, achieving real-time performance. Our results show that our method is capable of rendering photorealistic images for variable scenes and viewpoints, accurately capturing complex transparent structures along with global illumination. Our renderer can achieve real-time performance (256 × 256 at 63 frames/s and 512 × 512 at 32 frames/s) for scenes with multiple variable transparent objects.
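The abstract does not spell out the form of the permutation-invariant blending function. As a minimal sketch of the general idea (not the authors' actual network), a Deep-Sets-style construction applies a shared transform to each transparent layer's latent feature and pools with a symmetric operation such as summation, so the blended result cannot depend on layer order. All names and shapes below (`blend_permutation_invariant`, `w_phi`, `w_rho`, the feature dimensions) are illustrative assumptions:

```python
import numpy as np

def blend_permutation_invariant(layer_feats, w_phi, w_rho):
    """Blend per-layer latent features of transparent surfaces so the
    result is independent of the order in which layers are provided.

    layer_feats: (n_layers, d_in) array, one latent vector per surface.
    w_phi:       (d_in, d_hidden) shared per-layer transform.
    w_rho:       (d_hidden, d_out) transform applied after pooling.
    """
    # Shared transform applied to every layer independently (phi).
    h = np.maximum(layer_feats @ w_phi, 0.0)      # ReLU
    # Symmetric pooling (sum) removes any dependence on layer order.
    pooled = h.sum(axis=0)
    # Final transform (rho) maps the pooled code to a blended feature.
    return np.maximum(pooled @ w_rho, 0.0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))                   # 4 transparent layers
w_phi = rng.normal(size=(8, 16))
w_rho = rng.normal(size=(16, 3))

out = blend_permutation_invariant(feats, w_phi, w_rho)
shuffled = blend_permutation_invariant(feats[[2, 0, 3, 1]], w_phi, w_rho)
assert np.allclose(out, shuffled)                 # order does not matter
```

Any order-insensitive pooling (sum, mean, max) would serve here; the key design point is that no step distinguishes one layer index from another.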

Electronic Supplementary Material

Video
cvm-12-2-321_ESM.mp4

Computational Visual Media, Pages 321-335


Cite this article:
Zhang Z, Simo-Serra E. Neural scene baking for permutation invariant transparency rendering with real-time global illumination. Computational Visual Media, 2026, 12(2): 321-335. https://doi.org/10.26599/CVM.2025.9450433


Received: 06 December 2023
Accepted: 18 April 2024
Published: 20 March 2026
© The Author(s) 2026.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
