Design systems

How Salesforce is using AI to create a hyper-informed, more adoptable design system

4 min read
Rebecca Kerr
  •  Feb 27, 2020

What if you were floating on an ocean of design data—millions of small decisions made across industries, cultures, and countries? How would you begin to fish out the innumerable, invaluable insights and patterns hiding there? Salesforce is on course to find out.

It begins with Salesforce’s Lightning platform. Customers can use it to drag, drop, and tweak components until they bring an entire working application into existence—no coding skills required. Those with developer resources can also build apps with any coding language on the platform.

Often, the people who use Lightning to build business applications and mobile experiences are sitting in sales operations, marketing, IT, or human resources. In other words, they’re not designers by vocation.

That means one of the most crucial parts of the platform is the Lightning Design System (SLDS). A team of Salesforce designers and engineers built it to recommend design patterns, styles, and coded components to Salesforce customers. And now a team of UX minds is working to create AI tools that could tap into the vast amounts of design data across all Lightning experiences.

We spoke with a few of the key players on the Salesforce User Experience team about the future they foresee when machine learning is built into a customer-facing design system, namely:

  • Owen Schoppe (pictured above, right), principal product designer, UX R&D
  • Emily Witt (pictured above, left), senior user researcher, Trusted AI
  • Ambreen Hasan (pictured below, left), senior UX engineer, Design Systems
  • Alan Weibel (pictured below, right), principal product designer, Design Systems
Ambreen and Alan work on design systems tooling at Salesforce

What’s the relationship between machine learning and design systems at Salesforce?

Alan: We’re running algorithms and scripts to see how people are using styles, components, and patterns.

Owen: There’s a whole ecosystem of people outside Salesforce’s product teams who use Lightning to build apps and user experiences across all of our products. For instance, customers design self-service portals, storefronts, and landing pages in Commerce Cloud. Marketing Cloud is the most widely used email service provider in the space.

That constant customization work our customers are doing creates mountains of design data. Imagine the trends and insights you could glean from all of that information. What patterns are they using most? What new pattern is emerging as a trend? How does color vary between industries or cultures? And the central challenge: How can we work with customers to find a way to answer these questions while maintaining the data privacy and trust we’ve built?

Alan: Right now the way we find out about new patterns is through a manual design review process with internal design teams. It would be impossible to manage the same review for customer designs at the scale we’re dealing with.

But what if a script could look at every customization and understand shapes, typography, colors, how components are arranged? What if it could get smart enough to recommend a color, or notify our own design system team about an emerging pattern? What if you could build a script to automate all of that?

It could be just as powerful for our own internal design operations team—what patterns and components are Salesforce designers using most? What can we streamline or develop to stay ahead of emerging trends?

We haven’t even begun to tap into the potential of this data.

Machine learning can surface patterns in the way people use components.

How much of this is already running in production, and how much is still being built?

Owen: Some of the algorithms we need are already built, and now we’re starting to repurpose them for design systems. Lightning already has so much of the data we need built-in. The components already have names, and each page already declares which piece is a button or a text field. So the component usage map already exists internally.
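
To make that concrete, here is a minimal sketch of what a component usage map could look like, assuming a simplified, hypothetical page-metadata shape rather than Lightning’s actual internal format:

```typescript
// Hypothetical page metadata: each page declares its components by name.
interface ComponentNode {
  name: string; // e.g. "button", "text-field"
  children?: ComponentNode[];
}

interface PageMetadata {
  pageId: string;
  root: ComponentNode;
}

// Build a usage map: how often each named component appears across pages.
function buildUsageMap(pages: PageMetadata[]): Map<string, number> {
  const usage = new Map<string, number>();
  const visit = (node: ComponentNode): void => {
    usage.set(node.name, (usage.get(node.name) ?? 0) + 1);
    node.children?.forEach(visit);
  };
  pages.forEach((page) => visit(page.root));
  return usage;
}
```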

As with any AI project, the first step is a lot of heavy manual data science work: training the algorithm on what to look for, so it can recognize the areas of code that are cookie-cutter patterns and spot the things that are different because they’ve been customized. We have to slog through a lot of manual data entry, tweaking the direction of the algorithm. We dive into metadata from existing designs and look for trends like radio button placement, custom field arrangement, and color usage across industries.
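
One way to picture the “cookie-cutter versus customized” distinction is as a frequency question: arrangements that appear over and over are stock patterns, while rare ones are likely customizations. A rough illustration of that heuristic, reusing the hypothetical metadata types from the sketch above:

```typescript
// Reduce a page to an arrangement "signature", then flag the rare signatures
// as likely customizations. A deliberately simple frequency heuristic.
function arrangementSignature(page: PageMetadata): string {
  const names: string[] = [];
  const visit = (node: ComponentNode): void => {
    names.push(node.name);
    node.children?.forEach(visit);
  };
  visit(page.root);
  return names.join(" > ");
}

function flagCustomizations(pages: PageMetadata[], minCount = 10): PageMetadata[] {
  const counts = new Map<string, number>();
  for (const page of pages) {
    const sig = arrangementSignature(page);
    counts.set(sig, (counts.get(sig) ?? 0) + 1);
  }
  // Arrangements seen fewer than minCount times are "different": probably customized.
  return pages.filter((page) => (counts.get(arrangementSignature(page)) ?? 0) < minCount);
}
```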

It’s hard work, and people start to worry, “Are we ever going to get there?” There might be a lull before it shows any benefits, and you need organizational support to get through that.

We’ve come a long way with that work, and now we’re starting to be able to say things like, “If you want to stand out from your competitors, these are the colors you should use.”

How do your current AI capabilities affect developers using Lightning to build Salesforce products? 

Ambreen: Internal Salesforce developers aren’t forced to use the Lightning Design System, but in our experience they want to because of the value they get from it. Our devs get a report card that grades how closely the CSS they deliver matches the design system components. Sometimes you can visually inspect a button on the page and it looks identical to the recommended button style, but the analysis looks at how they implemented the code for that button. It can detect that they didn’t use the classes, so it’s not fully using the design system. When that feedback comes out in the report card, it gets them to really, really want to use the design system.
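
A toy version of that kind of check might compare the rendered markup against the classes the design system provides. The `slds-button` class is the standard SLDS convention for buttons, but the grading logic below is an invented illustration, not the internal report-card tool:

```typescript
// Toy adoption check: find <button> elements that don't carry the SLDS button
// class. They may look identical on screen, but they bypass the design system.
function gradeButtons(doc: Document): { total: number; adopted: number } {
  const buttons = Array.from(doc.querySelectorAll("button"));
  const adopted = buttons.filter((b) => b.classList.contains("slds-button")).length;
  return { total: buttons.length, adopted };
}

// A "report card" score: the share of buttons implemented with design system classes.
function adoptionScore(doc: Document): number {
  const { total, adopted } = gradeButtons(doc);
  return total === 0 ? 1 : adopted / total;
}
```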

We’ve seen the overall CSS payload shrink by a significant percentage because of this. There have been large-scale feature wins too: when we shipped a new cozy and compact view, it was much easier to roll out because we had reached a tipping point in product adoption. For the first time, we were able to change a lot of the CSS in one step, from one central place, and quickly ship an important new feature to all Salesforce products.

How do you think through the ethics of using AI to influence design?

Emily: First, we always need to protect our customers and their data when training our models, so, for this technology, we’re building machine learning tools that only include metadata. The models look at the types of components on a page and their placement in relation to each other, but we never extract any of the information entered in those fields.
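
That boundary is easy to express in code: features come from what a component is and where it sits, never from what a user typed into it. A minimal sketch, with hypothetical types that stand in for whatever the real metadata looks like:

```typescript
// Hypothetical runtime shape of a component on a customer's page.
interface RenderedComponent {
  type: string;   // e.g. "input", "picklist", "button"
  x: number;      // layout position on the page
  y: number;
  value?: string; // user-entered data: never read below
}

// What the model is allowed to see: component type and relative placement only.
interface ComponentFeature {
  type: string;
  relativeX: number;
  relativeY: number;
}

function toFeatures(
  components: RenderedComponent[],
  pageWidth: number,
  pageHeight: number
): ComponentFeature[] {
  // `value` is intentionally dropped; only metadata leaves this function.
  return components.map(({ type, x, y }) => ({
    type,
    relativeX: x / pageWidth,
    relativeY: y / pageHeight,
  }));
}
```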

Second, we see this as an opportunity to try out new processes for thinking through both the positive and negative impacts of emerging technology. For one thing, we’re piloting consequence scanning, a method created by Doteveryone, a nonprofit responsible-technology organization based in the UK. Consequence scanning is meant to fit into your existing agile processes: you brainstorm potential positive and negative consequences of what you’re building, then brainstorm ways to mitigate the negative consequences and account for those mitigation strategies in your agile planning.

One easy example of a potential negative consequence might be in the area of accessibility.

Owen: Yes. For example, in our datasets we may see that a gray-on-gray scheme is trending, which tells us that a lot of the people currently in designer roles love it. But it doesn’t meet accessibility standards for visually impaired people. It’s on us to put that data through accessibility checks that can evaluate whether the tool should recommend that trending gray scheme. Yes, people are using it, but if it doesn’t meet standards, it shouldn’t get recommended. Our job is to add that extra layer of nuance. The data isn’t something to follow blindly.
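
That extra layer can itself be automated: before a trending color pair is ever recommended, run it through a contrast check. The sketch below uses the WCAG 2.x relative-luminance formula and the standard 4.5:1 AA threshold for normal text; the recommendation gate around it is illustrative:

```typescript
// WCAG 2.x relative luminance for an sRGB color given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const linear = (c: number): number => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

// Contrast ratio between two colors, per WCAG: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Gate a trending pair: only recommend it if it clears WCAG AA for normal text.
function shouldRecommend(fg: [number, number, number], bg: [number, number, number]): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}

// A trending gray-on-gray pair fails the gate:
// shouldRecommend([120, 120, 120], [160, 160, 160]) -> false (ratio is roughly 1.7)
```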

Emily: Third, we have to be inclusive in our user research as we think through this technology. We try to diversify user feedback and training data sources, so we’re not oversampling one group or relying on a single method. We have to recognize that it’s not just about deciding what goes into the model; it’s also about understanding the context of use. User research is a powerful tool for understanding needs, use cases, and context, but we can’t rely only on our users to help us think through potential negative outcomes for users and customers. That’s why we supplement user research with other methods, like consequence scanning.

Beyond that, another set of consequences we’re thinking about involves the impact on the design industry. We have to consider the precarious balance between augmentation and automation. We’re in many ways designing the workplace, so how can we help customers not just automate, but support the human work of design with better data and insights?

Just like with everything we build, we want to be responsible about the development of this new tool. You can read more about the Salesforce approach to the ethical and humane use of technology on the website.

Do you think this tool has the potential to improve inclusion and accessibility in design?

Emily: Yes. Without this data, the patterns we recommend in Lightning might be informed by a small group of designers in one city. We could have some of the biggest companies from all over the world building storefronts using that pattern, and they would be using something made by a group from one corner of the globe. Then other companies and cultures start to emulate the same pattern. It starts to spread, and people end up changing their patterns because a few people in California decided that’s how it should be, even though that decision may have been influenced by one small group’s bias. No one in this scenario has bad intentions; it’s just a process perpetuating a bias.

With this tool our customers could have a much more diverse knowledge set to draw from—millions of tiny choices and patterns from all kinds of places, people, and cultures. Salesforce customers don’t want to perpetuate biases. When we talk to them about our AI capabilities, they want features that help them do good. They look to Salesforce for guidance and best practices, not just for internal experiences, but for external flows that will be seen by everyone.

How might this technology affect design teams in their future work?

Alan: The most immediate benefit is the research capability. This data gives you a much larger, more representative sample, so you can make better-informed decisions about new features for customers.

It also helps with pushing the envelope. If you already know what’s out there in the world, you can innovate beyond that.

And imagine if we fine-tuned this algorithm to the point where we had the right data to come up with suggestions that inform what you’re working on, not just neutral facts. Instead of starting from zero, you could start with what you know is going to work, and build from there.

In the future, a team might kick things off by looking at what machine learning has produced and work with a much higher level of confidence from the very beginning. It gives you a jumpstart, so you can spend more time researching and building what’s important. You’d get to certainty faster and ideally end up with a stronger product.

To learn more, follow Salesforce Experience and Design on Twitter.
