Dũng Đỗ Trung - Profile on Academia.edu


Papers by Dũng Đỗ Trung

SC-FEGAN: Face Editing Generative Adversarial Network with User's Sketch and Color

We present a novel image editing system that generates images as the user provides a free-form mask, sketch, and color as input. Our system consists of an end-to-end trainable convolutional network. Contrary to existing methods, our system wholly utilizes free-form user input with color and shape. This allows the system to respond to the user's sketch and color input, using it as a guideline to generate an image. In our particular work, we trained the network with an additional style loss [3], which made it possible to generate realistic results despite large portions of the image being removed. Our proposed network architecture, SC-FEGAN, is well suited to generating high-quality synthetic images from intuitive user inputs.
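
The style loss mentioned above (in the sense of the cited style-transfer work [3]) is typically computed by comparing Gram matrices of feature activations between the generated and ground-truth images. The following NumPy sketch illustrates that Gram-matrix style term only; the function names, the L1 distance, and the normalization are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def gram_matrix(features):
    # features: (H, W, C) activation map from some feature extractor
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    # The Gram matrix captures channel-wise feature correlations ("style"),
    # here normalized by the total number of elements.
    return f.T @ f / (h * w * c)

def style_loss(generated_feats, target_feats):
    # Mean L1 distance between the two Gram matrices -- one common choice
    # for a style term; other formulations use an L2 (Frobenius) distance.
    return np.abs(gram_matrix(generated_feats) - gram_matrix(target_feats)).mean()
```

In practice this term would be evaluated on activations from several layers of a pretrained network and summed, alongside the adversarial and reconstruction losses.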
