Abstract

Stroke-based rendering reconstructs an input image in an oil painting style by predicting a sequence of brush strokes. Conventional methods predict strokes one at a time or require multiple inference steps because of a limit on the number of strokes they can predict at once. This procedure slows translation, limiting their practicality. In this study, we propose MambaPainter, which can predict a sequence of over 100 brush strokes in a single inference step, enabling rapid translation. We achieve this sequence prediction by incorporating a selective state-space model. Additionally, we introduce a simple extension to patch-based rendering for translating high-resolution images, which improves visual quality with a minimal increase in computational cost. Experimental results demonstrate that MambaPainter translates inputs to oil painting-style images more efficiently than state-of-the-art methods. The code is available at the URL below.
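
As a rough illustration of the two ideas above, the sketch below predicts all stroke parameters for every image patch in one batched forward pass and then stitches the painted patches back into a full canvas. Everything here is a hypothetical stand-in, not the repository's actual API: `TinyStrokePredictor` (a plain MLP in place of MambaPainter's selective state-space blocks), `render_strokes` (a toy rectangle renderer in place of a learned stroke renderer), and the 8-value stroke parameterization are all illustrative assumptions.

```python
# Minimal, self-contained sketch (PyTorch). All names and shapes are
# illustrative assumptions, not MambaPainter's actual implementation.
import torch
import torch.nn as nn

PATCH = 64        # resolution of each image patch
N_STROKES = 100   # strokes predicted per patch in a single forward pass
PARAMS = 8        # hypothetical per-stroke parameters: x, y, w, h, angle, R, G, B

class TinyStrokePredictor(nn.Module):
    """Stand-in predictor: one patch in, the whole stroke sequence out."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * PATCH * PATCH, 256), nn.ReLU()
        )
        self.head = nn.Linear(256, N_STROKES * PARAMS)

    def forward(self, patches):                   # patches: (B, 3, PATCH, PATCH)
        out = self.head(self.encode(patches))     # single inference step
        return torch.sigmoid(out).view(-1, N_STROKES, PARAMS)

def render_strokes(params, size=PATCH):
    """Toy renderer: paints each stroke as an axis-aligned colored rectangle."""
    canvas = torch.ones(params.shape[0], 3, size, size)
    for b in range(params.shape[0]):
        for x, y, w, h, _angle, r, g, bl in params[b]:
            x0, y0 = int(x * (size - 1)), int(y * (size - 1))
            x1 = min(size, x0 + max(1, int(w * size / 2)))
            y1 = min(size, y0 + max(1, int(h * size / 2)))
            canvas[b, :, y0:y1, x0:x1] = torch.stack([r, g, bl]).view(3, 1, 1)
    return canvas

# Patch-based rendering: tile the high-resolution input, predict every
# tile's strokes in one batched call, then stitch the painted tiles back.
model = TinyStrokePredictor().eval()
image = torch.rand(1, 3, 256, 256)                # dummy high-resolution input
tiles = image.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)
tiles = tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, PATCH, PATCH)  # (16, 3, 64, 64)
with torch.no_grad():
    strokes = model(tiles)                        # (16, 100, 8) in one step
painted = render_strokes(strokes)                 # (16, 3, 64, 64) painted tiles
grid = painted.view(1, 4, 4, 3, PATCH, PATCH).permute(0, 3, 1, 4, 2, 5)
result = grid.reshape(1, 3, 256, 256)             # stitched full-resolution canvas
print(result.shape)                               # torch.Size([1, 3, 256, 256])
```

The structural point is that `model(tiles)` is a single batched call yielding all strokes for all patches at once; this, rather than the toy MLP or rectangle renderer used above, is what the single-step claim refers to.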

Source Code

https://github.com/STomoya/MambaPainter

Reference

Tomoya Sawada and Marie Katsurai, “MambaPainter: Neural Stroke-Based Rendering in a Single Step,” in SIGGRAPH Asia 2024 Posters (SA ’24), Association for Computing Machinery, New York, NY, USA, Article 98, 1–2, 2024. DOI: 10.1145/3681756.3697906.
