InfoColorizer: Interactive Recommendation of Color Palettes for Infographics
Related papers
Infographics Wizard: Flexible Infographics Authoring and Design Exploration
2022
Infographics are an aesthetic visual representation of information following specific design principles of human perception. Designing infographics can be tedious for non-experts and time-consuming even for professional designers. With the help of designers, we propose a semi-automated infographic framework for general structured and flow-based infographic design generation. For novice designers, our framework automatically creates and ranks infographic designs for a user-provided text with no requirement for design input. However, expert designers can still provide custom design inputs to customize the infographics. We also contribute a dataset of individual visual group (VG) designs (in SVG), along with a dataset of 1k complete infographic images with segmented VGs. Evaluation results confirm that, using our framework, designers of all expertise levels can generate generic infographic designs faster than with existing methods while maintaining the same quality as hand-designed infographic templates.
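For intuition, here is a minimal, purely hypothetical sketch of the "generate candidates, then rank them" idea described in the abstract; the layouts, styles, and scoring heuristic are invented for illustration and are not the paper's implementation.

```python
# Hypothetical sketch: enumerate candidate infographic designs for a piece of
# text and order them by a toy heuristic score. All names and weights are
# illustrative, not Infographics Wizard's actual ranking.
from dataclasses import dataclass
from itertools import product
import random

@dataclass
class Candidate:
    layout: str          # e.g. "vertical-flow", "grid"
    vg_style: str        # visual-group style id
    score: float = 0.0

def score_candidate(c: Candidate, n_text_blocks: int) -> float:
    # Toy scoring: prefer flow layouts for longer texts, grids for short ones.
    layout_bonus = 1.0 if (c.layout == "vertical-flow") == (n_text_blocks > 4) else 0.5
    style_bonus = {"minimal": 0.8, "illustrated": 1.0}.get(c.vg_style, 0.6)
    return layout_bonus * style_bonus + random.uniform(0, 0.05)  # tie-breaking jitter

def recommend(text_blocks: list, top_k: int = 3) -> list:
    layouts = ["vertical-flow", "grid", "radial"]
    styles = ["minimal", "illustrated", "iconic"]
    candidates = [Candidate(l, s) for l, s in product(layouts, styles)]
    for c in candidates:
        c.score = score_candidate(c, len(text_blocks))
    return sorted(candidates, key=lambda c: c.score, reverse=True)[:top_k]

if __name__ == "__main__":
    blocks = ["step 1", "step 2", "step 3", "step 4", "step 5"]
    for cand in recommend(blocks):
        print(cand)
```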
CoColor: Interactive Exploration of Color Designs
Proceedings of the 28th International Conference on Intelligent User Interfaces
Choosing colors is a pivotal but challenging component of graphic design. The paper presents an intelligent interaction technique supporting designers' creativity in color design. It fills a gap in the literature by proposing an integrated technique for color exploration, assignment, and refinement: CoColor. Our design goals were 1) let designers focus on color choice by freeing them from pixel-level editing and 2) support rapid flow between low- and high-level decisions. Our interaction technique utilizes three steps (choice of focus, choice of suitable colors, and the colors' application to designs), wherein the choices are interlinked and computer-assisted, thus supporting divergent and convergent thinking. It considers color harmony, visual saliency, and elementary accessibility requirements. The technique was incorporated into the popular design tool Figma and evaluated in a study with 16 designers. Participants explored the coloring options more easily with CoColor and considered it helpful.
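As a rough illustration of combining harmony and accessibility when scoring candidate colors, here is a hedged Python sketch; the WCAG-style contrast ratio is a standard formula, but the harmony heuristic, threshold, and overall scoring are assumptions, not CoColor's algorithm.

```python
# Minimal sketch of scoring candidate colors: an accessibility (contrast) gate
# followed by a toy harmony term. Illustrative only.
import colorsys

def relative_luminance(rgb):
    # WCAG 2.x relative luminance of an sRGB color given as (r, g, b) in 0..255.
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def harmony(candidate, base):
    # Toy harmony term: reward hue differences near 0 or 180 degrees
    # (analogous / complementary pairs).
    h1 = colorsys.rgb_to_hsv(*(c / 255.0 for c in candidate))[0] * 360
    h2 = colorsys.rgb_to_hsv(*(c / 255.0 for c in base))[0] * 360
    d = abs(h1 - h2) % 360
    d = min(d, 360 - d)
    return 1.0 - min(abs(d - 0), abs(d - 180)) / 180.0

def score(candidate, base, background):
    # Accessibility gate: require roughly AA text contrast, then rank by harmony.
    if contrast_ratio(candidate, background) < 4.5:
        return 0.0
    return harmony(candidate, base)

print(score((200, 40, 40), (40, 40, 200), (255, 255, 255)))
```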
Predicting Visual Importance Across Graphic Design Types
arXiv (Cornell University), 2020
This paper introduces a Unified Model of Saliency and Importance (UMSI), which learns to predict visual importance in input graphic designs, and saliency in natural images, along with a new dataset and applications. Previous methods for predicting saliency or visual importance are trained individually on specialized datasets, making them limited in application and leading to poor generalization on novel image classes, while requiring a user to know which model to apply to which input. UMSI is a deep learning-based model simultaneously trained on images from different design classes, including posters, infographics, mobile UIs, as well as natural images, and includes an automatic classification module to classify the input. This allows the model to work more effectively without requiring a user to label the input. We also introduce Imp1k, a new dataset of designs annotated with importance information. We demonstrate two new design interfaces that use importance prediction, including a tool for adjusting the relative importance of design elements, and a tool for reflowing designs to new aspect ratios while preserving visual importance.
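The multi-task structure described here (shared features, a head that predicts the input's design class, and a decoder that predicts an importance map) can be sketched as follows; this is an illustrative stand-in with arbitrary layer sizes, not the authors' UMSI architecture.

```python
# A hedged sketch of the UMSI idea: one shared encoder, a classification head
# for the input's design class, and a decoder producing a per-pixel
# importance/saliency map. Layer sizes are arbitrary.
import torch
import torch.nn as nn

class UnifiedImportanceModel(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Automatic input-class prediction (poster, infographic, UI, natural image, ...).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )
        # Importance / saliency map decoder.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.classifier(feats)

img = torch.rand(1, 3, 128, 128)
importance_map, class_logits = UnifiedImportanceModel()(img)
print(importance_map.shape, class_logits.shape)  # (1, 1, 128, 128), (1, 5)
```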
Learning Visual Importance for Graphic Designs and Data Visualizations
Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 2017
Knowing where people look and click on visual designs can provide clues about how the designs are perceived, and where the most important or relevant content lies. The most important content of a visual design can be used for effective summarization or to facilitate retrieval from a database. We present automated models that predict the relative importance of different elements in data visualizations and graphic designs. Our models are neural networks trained on human clicks and importance annotations on hundreds of designs. We collected a new dataset of crowdsourced importance, and analyzed the predictions of our models with respect to ground truth importance and human eye movements. We demonstrate how such predictions of importance can be used for automatic design retargeting and thumbnailing. User studies with hundreds of MTurk participants validate that, with limited post-processing, our importance-driven applications are on par with, or outperform, current state-of-the-art methods, including natural image saliency. We also provide a demonstration of how our importance predictions can be built into interactive design tools to offer immediate feedback during the design process.
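One of the applications mentioned, importance-driven thumbnailing, can be illustrated with a small sketch that slides a crop window over a predicted importance map and keeps the window with the highest total importance; the map here is random and the routine is an assumption, not the paper's retargeting method.

```python
# Illustrative-only sketch: pick a thumbnail crop that maximizes summed
# importance. In the paper the map would come from the trained network.
import numpy as np

def best_crop(importance: np.ndarray, crop_h: int, crop_w: int):
    """Return (top, left) of the crop window with the highest summed importance."""
    H, W = importance.shape
    # Integral image makes each window sum O(1).
    ii = np.pad(importance, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    best, best_pos = -1.0, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            s = (ii[top + crop_h, left + crop_w] - ii[top, left + crop_w]
                 - ii[top + crop_h, left] + ii[top, left])
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos

imp = np.random.rand(60, 80)          # stand-in for a model prediction
print(best_crop(imp, crop_h=30, crop_w=40))
```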
Personalized UI Layout Generation using Deep Learning
International Journal of Innovative Research in Engineering and Management (IJIREM), 2024
This study presents a new approach to personalized UI design using deep learning techniques to improve user experience through interface customization. We propose a hybrid VAE-GAN architecture combining variational autoencoders and generative adversarial networks to create coherent and user-specific UI layouts. The system includes user models that capture personal preferences and behaviors, enabling real-time personalization of interactions. Our methodology leverages large-scale UI design datasets and user interaction logs to train and evaluate the model. Experimental results demonstrate significant improvements in layout quality, personalization accuracy, and user satisfaction compared to existing approaches. A user study with 200 participants from different cultures demonstrates the effectiveness of the personalization model in real situations. The system achieves a personalization accuracy of 0.89 ± 0.03 and a transfer speed of 1.2 s ± 0.1 s, outperforming state-of-the-art UI personalization systems in efficiency. In addition, we discuss the theoretical implications of our approach for UI/UX design principles, potential business applications, and ethical considerations around AI-driven personalization. This research contributes to advancing adaptive interface design and opens up new ways to integrate deep learning with UI/UX processes.
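A minimal sketch of a hybrid VAE-GAN over a flat layout representation, conditioned on a user-preference vector, is shown below; all dimensions, the layout encoding, and the conditioning scheme are assumptions for illustration, not the paper's architecture.

```python
# Assumption-laden sketch of a hybrid VAE-GAN: a VAE encodes/decodes a flat
# layout vector conditioned on user preferences, and a discriminator scores
# decoded layouts. Sizes are illustrative.
import torch
import torch.nn as nn

LAYOUT_DIM, PREF_DIM, LATENT_DIM = 40, 8, 16   # hypothetical dimensions

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LAYOUT_DIM + PREF_DIM, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT_DIM)
        self.logvar = nn.Linear(64, LATENT_DIM)

    def forward(self, layout, prefs):
        h = self.net(torch.cat([layout, prefs], dim=-1))
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):               # doubles as the GAN generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + PREF_DIM, 64), nn.ReLU(),
            nn.Linear(64, LAYOUT_DIM), nn.Sigmoid(),   # element boxes normalized to [0, 1]
        )

    def forward(self, z, prefs):
        return self.net(torch.cat([z, prefs], dim=-1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LAYOUT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, layout):
        return self.net(layout)

def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

enc, dec, disc = Encoder(), Decoder(), Discriminator()
layout, prefs = torch.rand(4, LAYOUT_DIM), torch.rand(4, PREF_DIM)
mu, logvar = enc(layout, prefs)
recon = dec(reparameterize(mu, logvar), prefs)
print(recon.shape, disc(recon).shape)   # torch.Size([4, 40]) torch.Size([4, 1])
```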
ACE: a color expert system for user interface design
Color is used in computer graphics to code information, to call attention to items, to signal a user, and to enhance display aesthetics, but using color effectively and tastefully is often beyond the abilities of application programmers because the study of color crosses many disciplines, and many aspects, such as human color vision, are not completely understood. We compiled a comprehensive set of guidelines for the proper use of color, but even these guidelines cannot provide all of the aesthetic and human factors knowledge necessary for making good color selections. Furthermore, programmers may misinterpret or ignore the guidelines. To alleviate some of these problems, we have implemented ACE, A Color Expert system, which embodies the color rules and applies them to user interface design. The goal of the implementation was to test whether an automated mechanism would be a viable solution to the problem of choosing effective and tasteful colors.
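In the spirit of such a rule-based color expert system, the following toy sketch encodes a few guidelines as predicate/advice pairs and critiques a proposed color assignment; the rules shown are illustrative and are not ACE's actual rule base.

```python
# Toy rule-based sketch: apply a few color-usage guidelines to a proposed UI
# color assignment. The rules are made up for illustration.
RULES = [
    ("too many distinct hues",
     lambda a: len(set(a.values())) > 6,
     "Limit the palette to a small number of hues."),
    ("red/blue adjacency",
     lambda a: {"red", "blue"} <= {a.get("text"), a.get("background")},
     "Avoid pure red on pure blue; the eye focuses them at different depths."),
    ("text matches background",
     lambda a: a.get("text") == a.get("background"),
     "Text and background must use clearly different colors."),
]

def critique(assignment: dict) -> list:
    """Return (rule name, advice) for every violated guideline."""
    return [(name, advice) for name, test, advice in RULES if test(assignment)]

ui = {"text": "red", "background": "blue", "accent": "yellow"}
for name, advice in critique(ui):
    print(f"{name}: {advice}")
```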
Exploring Content-based Artwork Recommendation with Metadata and Visual Features
ArXiv, 2017
Compared to other areas, artwork recommendation has received little attention, despite the continuous growth of the artwork market. Previous research has relied on ratings and metadata to make artwork recommendations, as well as visual features extracted with deep neural networks (DNN). However, these features have no direct interpretation in terms of explicit visual features (e.g. brightness, texture), which might hinder explainability and user acceptance. In this work, we study the impact of artwork metadata as well as visual features (DNN-based and attractiveness-based) for physical artwork recommendation, using images and transaction data from the UGallery online artwork store. Our results indicate that: (i) visual features perform better than manually curated data, (ii) DNN-based visual features perform better than attractiveness-based ones, and (iii) a hybrid approach improves the performance further. Our research can inform the development of new artwork recommenders relying on diverse...
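The hybrid idea of blending metadata similarity with visual-feature similarity can be sketched as below; the feature vectors, blend weight, and profile construction are assumptions for illustration, not the paper's recommender.

```python
# Hedged sketch of hybrid content-based scoring: blend metadata similarity and
# visual-feature similarity between a user profile and each catalog item.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def hybrid_score(user_meta, user_visual, item_meta, item_visual, alpha=0.4):
    # alpha weights metadata similarity; (1 - alpha) weights visual similarity.
    return alpha * cosine(user_meta, item_meta) + (1 - alpha) * cosine(user_visual, item_visual)

rng = np.random.default_rng(0)
user_meta, user_visual = rng.random(10), rng.random(128)     # profile from past purchases
catalog = [(f"artwork_{i}", rng.random(10), rng.random(128)) for i in range(5)]
ranked = sorted(catalog,
                key=lambda it: hybrid_score(user_meta, user_visual, it[1], it[2]),
                reverse=True)
print([name for name, _, _ in ranked])
```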
User-Centric Semi-Automated Infographics Authoring and Recommendation
ArXiv, 2021
Fig. 1. Infographics Wizard implements a flexible framework for full- and semi-automated infographic generation. Based on the user's input in the markdown format (A), Infographics Wizard generates various recommendations for different main design components of infographics, including the Visual Information Flow layout (B), the design of individual Visual Groups (C), and the connecting elements between groups (D). The user can then explore these design alternatives (E) and assemble a final infographic of their desire. More experienced designers can optionally provide a main pivot graphic, the general information flow, or both on a canvas (F) via direct manipulation to control the generation and recommendation of infographic design.
Blucher Design Proceedings, 2023
Colour palettes have long played a significant role in not only capturing design ambience (e.g., as mood boards), but more significantly, in translating an abstract intuition into an explicit ordering mechanism for design representation and synthesis, whether in the discipline of graphic design, interior design or architectural design. Might this difficult process of design synthesis from a low-dimensional colour input domain to a high-dimensional spatial design output domain be computationally mapped? Using today's generative adversarial networks (GANs), the paper aims to investigate this plausibility, and in doing so, hopes to envision an AI-augmented design workflow and tooling. Newly created datasets are made procedurally and used to train three different types of deep learning models in the specific context of generating living room interior layouts. The results suggest that a combination of syntactic and semantic generative processes is necessary for a critical appropriation of such AI models.
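To make the palette-to-layout mapping concrete, here is a hedged sketch of a conditional generator that takes a noise vector plus a flattened five-swatch palette; the architecture and sizes are illustrative and do not reproduce the paper's models.

```python
# Purely illustrative sketch of conditional generation: noise + flattened
# colour palette in, image-like layout tensor out.
import torch
import torch.nn as nn

PALETTE_DIM = 5 * 3   # five RGB swatches
NOISE_DIM = 32

class PaletteConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(NOISE_DIM + PALETTE_DIM, 64 * 8 * 8)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, noise, palette):
        h = self.fc(torch.cat([noise, palette], dim=-1)).view(-1, 64, 8, 8)
        return self.up(h)   # (N, 3, 32, 32) layout image

gen = PaletteConditionedGenerator()
fake = gen(torch.randn(2, NOISE_DIM), torch.rand(2, PALETTE_DIM))
print(fake.shape)   # torch.Size([2, 3, 32, 32])
```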
Webthetics: Quantifying webpage aesthetics with deep learning
International Journal of Human-Computer Studies, 2018
As the web has become the most popular medium for attracting users and customers worldwide, webpage aesthetics plays an increasingly important role in engaging users online and shaping their user experience. We present a novel method using deep learning to automatically compute and quantify webpage aesthetics. Our deep neural network, named Webthetics, is trained on collected user rating data and can extract representative features from raw webpages and quantify their aesthetics. To improve the model performance, we propose to transfer knowledge from an image style recognition task into our network. We have validated that our method significantly outperforms previous methods using hand-crafted features such as colorfulness and complexity. These promising results indicate that our method can serve as an effective and efficient means for providing objective aesthetics evaluation during the design process.
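The transfer-learning recipe described here (reuse features from a style-recognition network, then train a small head to regress aesthetics ratings) can be sketched as follows; the backbone below is a stand-in CNN and everything is illustrative rather than the Webthetics implementation.

```python
# Hedged sketch of transfer learning for aesthetics regression: freeze a
# feature extractor (in the paper, one trained for image style recognition)
# and fit a small regression head on user ratings.
import torch
import torch.nn as nn

backbone = nn.Sequential(               # stand-in for a pretrained style network
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():
    p.requires_grad = False              # keep the transferred features fixed

head = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
model = nn.Sequential(backbone, head)    # aesthetics score normalized to [0, 1]

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

screenshots = torch.rand(8, 3, 128, 128)   # fake webpage screenshots
ratings = torch.rand(8, 1)                 # fake crowd aesthetics ratings
loss = loss_fn(model(screenshots), ratings)
loss.backward()
optimizer.step()
print(float(loss))
```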