Research Article | Open Access

DragTex: Generative point-based texture editing on 3D mesh

School of Computer Science, Beijing Institute of Technology, Beijing 100081, China

Abstract

Creating textured 3D meshes with generative artificial intelligence has recently garnered significant attention. While existing methods support text-based texture generation and editing for 3D meshes, they often struggle to precisely control the pixels of texture images through intuitive interaction. Meanwhile, editing texture images by direct image deformation, even with precise point-to-point control, does not guarantee that the generated mesh textures match the user's interactive intent. Existing work supports generative editing of 2D images via dragging interactions, but applying such methods directly to 3D mesh textures leads to problems such as a lack of local consistency across views, error accumulation, and long training times. To address these challenges, we propose DragTex, a generative point-based texture editing method for 3D meshes. It uses a diffusion model to blend locally inconsistent textures in the region near the deformed silhouette across different views, enabling locally consistent texture editing. We further fine-tune a LoRA decoder to reduce reconstruction errors in the non-dragged region, thereby mitigating overall error accumulation. Moreover, we train LoRA using multi-view images jointly instead of training each view individually, which significantly shortens the training time. Experimental results show that our method effectively drags textures on 3D meshes and generates plausible textures that meet the user's intent.
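The abstract's third point, joint multi-view LoRA fine-tuning of the decoder, can be illustrated with a short sketch. The PyTorch code below is an illustration under stated assumptions, not the authors' implementation: the adapter class LoRALinear, the helper finetune_decoder_multiview, and all hyperparameter values (rank, scale, step count, learning rate) are hypothetical, and decoder, latents, and views stand in for whatever latent decoder and rendered views the method actually uses.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Standard LoRA: a frozen pretrained linear layer plus a trainable
    # low-rank residual (class name and defaults here are illustrative).
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep pretrained weights frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

def finetune_decoder_multiview(decoder, latents, views, steps=200, lr=1e-4):
    # Hypothetical helper: fit one shared set of LoRA weights against all
    # rendered views, rather than running a separate pass per view.
    params = [p for p in decoder.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(steps):
        for z, target in zip(latents, views):  # every view, same adapter
            opt.zero_grad()
            loss = nn.functional.mse_loss(decoder(z), target)
            loss.backward()
            opt.step()
    return decoder

The design point mirrored from the abstract is that the inner loop visits every view within a single fine-tuning run, so one set of adapter weights is shared across views; this single joint pass is what replaces the per-view fine-tunings and yields the reported training-time savings.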



Cite this article:
Zhang Y, Xu Q, Zhang L. DragTex: Generative point-based texture editing on 3D mesh. Computational Visual Media, 2026, 12(2): 381-394. https://doi.org/10.26599/CVM.2025.9450469


Received: 22 March 2024
Accepted: 06 November 2024
Published: 20 March 2026
© The Author(s) 2026.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

To submit a manuscript, please go to https://jcvm.org.