Generating design systems using deep learning




How do you enable people to easily design beautiful mobile and web apps when they have no design background?

“Easy: you just throw some neural networks at the problem!”

“Well, kinda”

You might have heard of Uizard already through pix2code, our early machine learning research, or through our technology for turning hand-drawn sketches into interactive app prototypes.

Today, I would like to talk about one of our new AI-driven features: automatic theme generation.

Design is hard for non-designers

Over the course of our private beta program, our recurring active users increasingly turned out to be startup founders, product managers, consultants, business analysts, and marketing teams. We’ve also seen traction among user experience (UX) professionals and user researchers, who typically work closely with design teams but aren’t trained as graphic or UI designers themselves. Put simply, Uizard became a design tool for non-designers.

Uizard evolved to become an easy-to-use design tool where you can create mobile app, website, and desktop software screens with drag-and-drop components and templates, rather than with pixels and vectors as in most UI design tools. You can also upload hand-drawn wireframe sketches and have them automatically transformed into editable screens. Pretty easy to use!

Because Uizard is a component-based design tool, components are organized as a minimalist design system: the Uizard theme (look up atomic design and design systems if you’re curious about the underlying concepts). Generally speaking, a Uizard theme is a means to organize and categorize components and to define their colors, typography, and styling properties. These components (e.g. buttons, labels, input fields) can be assembled into reusable templates (e.g. login form, image gallery, payment section) that can then be used to design screens and apps.
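To make this concrete, here is a minimal sketch of what such a theme could look like as a data structure. All field names and values are illustrative assumptions, not Uizard’s actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are assumptions, not Uizard's schema.

@dataclass
class DesignTokens:
    """Atomic values shared by every component in the theme."""
    primary_color: str = "#4A90D9"
    background_color: str = "#FFFFFF"
    font_family: str = "Inter"
    border_radius: int = 8    # px
    base_font_size: int = 16  # px

@dataclass
class ComponentStyle:
    """Styling for one component type (button, label, input field, ...)."""
    name: str
    fill: str
    text_transform: str = "none"  # e.g. "uppercase"
    font_weight: int = 400
    padding: int = 12             # px
    shadow: bool = False

@dataclass
class Theme:
    """A theme = design tokens + a catalog of styled components.
    Components are then assembled into reusable templates
    (login form, image gallery, payment section, ...)."""
    tokens: DesignTokens
    components: list[ComponentStyle] = field(default_factory=list)

theme = Theme(
    tokens=DesignTokens(),
    components=[
        ComponentStyle(name="button", fill="#4A90D9",
                       text_transform="uppercase", font_weight=600, shadow=True),
        ComponentStyle(name="input", fill="#FFFFFF"),
    ],
)
```

The point of the structure is that screens never hard-code styling: they reference components, which reference tokens, so swapping the theme restyles an entire project at once.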

Although Uizard comes with a few pre-made themes, until now it was hard for our non-designer customers to:

  • Easily create new themes that would look good given that they don’t have a design background.
  • Easily import existing design artifacts into Uizard to make sure that their projects would match their company’s brand identity.

Two of our early adopters mentioned this limitation during our private beta phase: Valentin de Bruyn and Simon Hangaard Hansen (thanks again guys!).

We then asked ourselves: can we use deep learning to generate themes automatically?

Turns out, we can indeed use deep learning and neural networks to do just that! Woohoo! You’ll understand in the section below why machine learning is needed here and why we can’t simply rely on “traditional” software development.

As described in this blog post, we’ve built a system able to extract components, recognize fonts, and pull out design tokens and styling properties (text transform, border radius, padding, font weight, shadow, etc.) from images, URLs, and even Sketch files (a toy sketch of the simplest piece follows the list below). This allows our customers to:

  • Easily generate original themes from any source of visual inspiration.
  • Easily extract components and styling properties from an existing brand identity, style guide, design system.
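Our pipeline is learned end to end and extracts far more than colors, but the simplest piece, pulling a color palette out of an image, can be approximated with classic clustering. This is a toy illustration assuming Pillow, NumPy, and scikit-learn, not the production system:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def extract_palette(image_path: str, n_colors: int = 5) -> list[str]:
    """Toy design-token extraction: cluster pixels to find dominant colors.
    The real system also recognizes components, fonts, and styling
    properties; this only recovers a palette."""
    img = Image.open(image_path).convert("RGB")
    img.thumbnail((200, 200))                # downsample: speed over precision
    pixels = np.asarray(img).reshape(-1, 3)  # (n_pixels, 3) RGB rows
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    centers = km.cluster_centers_.round().astype(int)
    # Order clusters by size so the most dominant color comes first.
    order = np.argsort(-np.bincount(km.labels_))
    return ["#{:02X}{:02X}{:02X}".format(*(int(c) for c in centers[i]))
            for i in order]

# e.g. extract_palette("screenshot.png") -> ["#FFFFFF", "#4A90D9", ...]
```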

To the best of our knowledge, this is a world first. (Yes, we are pretty proud of our technology, sorry about that…) Adobe has built a tool to extract colors from images, but that system is unable to extract any other information. Someone built a tool to extract design tokens from a URL, but it can only pull out a list of colors, fonts, and spacing information, not fully ready-to-use components.
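For intuition, that URL-based approach essentially boils down to scraping stylesheets for values. A crude approximation is sketched below, assuming the requests library (real extractors inspect the browser’s computed styles instead of raw source):

```python
import re
import requests

def scrape_tokens(url: str) -> dict:
    """Toy URL-based token extraction: regex the raw HTML/CSS for hex colors
    and font-family declarations. Real tools walk computed styles and also
    recover spacing; none of them produce ready-to-use components."""
    source = requests.get(url, timeout=10).text
    colors = sorted(set(re.findall(r"#[0-9a-fA-F]{6}\b", source)))
    fonts = sorted(set(
        f.strip().strip("'\"")
        for decl in re.findall(r"font-family\s*:\s*([^;}]+)", source)
        for f in decl.split(",")
    ))
    return {"colors": colors, "fonts": fonts}

# e.g. scrape_tokens("https://example.com")
```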

So how did we make it work?

Generating a design system from an image

Since we had already developed an entire neural network pipeline for transforming user interface sketches into editable screens, we could reuse some of its building blocks for this new feature. Although a hand-drawn wireframe looks different from a user interface screenshot, both images represent the very same concepts: a button, a body of text, an input field, a footer with icons, etc. In other words, both types of images come from the same domain.

Our neural network pipeline is trained to process an image as input and identify components, recognize fonts, and extract styling properties (text size, border radius, padding, font weight, shadow, etc.). This enables the creation of a custom-made Uizard theme in seconds by simply uploading a single image.
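While we won’t share the model internals here, the overall shape of such a pipeline can be sketched as a sequence of stages: detect components in the image, then predict a font and styling properties for each detection, and finally assemble everything into a theme. The functions below are placeholder stubs standing in for the trained networks; the whole sketch is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected UI component and its bounding box (x, y, w, h)."""
    kind: str
    box: tuple[int, int, int, int]

def detect_components(image) -> list[Detection]:
    """Stage 1: an object-detection network proposes component boxes.
    Hard-coded placeholder output stands in for the trained model."""
    return [Detection("button", (40, 200, 120, 44))]

def recognize_font(image, det: Detection) -> str:
    """Stage 2a: a font classifier names the closest matching typeface."""
    return "Inter"  # placeholder prediction

def extract_style(image, det: Detection) -> dict:
    """Stage 2b: regress styling properties from the cropped component."""
    return {"border_radius": 8, "padding": 12,
            "font_weight": 600, "shadow": True}  # placeholder prediction

def image_to_theme(image) -> dict:
    """Stage 3: assemble per-component predictions into a theme."""
    return {
        "components": [
            {"kind": d.kind,
             "font": recognize_font(image, d),
             "style": extract_style(image, d)}
            for d in detect_components(image)
        ]
    }
```

Splitting the problem into detection followed by per-component prediction is what lets the sketch-to-screen building blocks be reused: only the per-component heads need to change between tasks.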
