Designers and entrepreneurs have been fascinated with AI and conversational design for years. So I wasn’t surprised when a potential client asked me about AI:
“We’re building an AI assistant for enterprises, and it needs to combine conversational design with machine learning.”
“Can you design it?”
I said, “Of course we can design it, but it will take a lot of user research.”
Designing an AI application, particularly one with a voice interface, is a huge project. Even with the popularity of Alexa, Google Assistant and Siri, there’s still a great deal of ambiguity about how to build this type of technology.
But there’s good news.
The human-centered design process is made to help you handle ambiguity. If you follow an in-depth research process, you can uncover solutions for any problem.
Research is a huge priority for us, so even though designing AI was a new world to most of the team, we knew that we could pull it off.
Here’s a closer look at the research process that brought this project to life.
Phase 1: Discovery and planning
The client’s vision was to create an AI application that helped enterprise employees save time, make easier decisions, and focus on the work they actually wanted to do.
A noble goal, but it’s one others had tried to achieve before and failed. For this project to succeed, we had to intimately understand how this application would fit into our audience’s lives—both emotionally and practically.
Before we made any design decisions, we began the Discovery & Planning phase.
For this project, two particular techniques stood out: stakeholder interviews and competitive analysis.
Stakeholder and user interviews
I’ve worked for several enterprises in my career, so I had an understanding of how complex the users’ environment would be. To create something that people would actually use, we had to identify common patterns that people were dealing with across organizations.
Our client had worked in an enterprise for decades, so they were able to provide us with a sense of the recurring problems that plagued our audience.
Essentially, the idea for the enterprise virtual assistant arose from the sensory overload that’s intrinsic to massive corporate environments. The sheer scale of an enterprise often overwhelms people.
Onboarding new hires requires guiding some poor soul through a labyrinthine knowledge base. Few, if any, can successfully understand enterprise systems alone.
New employees, or even employees who travel, often don’t know how to book a meeting because they have no idea what’s feasible.
What’s more, juggling these kinds of tasks takes a toll on people. Research indicates that multitasking, or “cognitive switching,” can consume as much as 40% of someone’s productivity.
After our preliminary interviews, we began to build a sense of the problems facing our audience. To contextualize how they might think to solve their challenges, we needed to understand the competitive landscape.
In terms of AI assistants for the enterprise, there wasn't a tremendous amount of direct competition.
However, by analyzing the main players in the voice interface / AI assistant industry, we were able to identify existing patterns for conversational design.
Interestingly, we noticed there are two main approaches to conversational design: guided and unguided. For example, when you make a request of Alexa, she will ask repeated follow-up questions to make sure she accurately completes the task.
In contrast, Google Assistant takes a much more hands-off view. The system more or less expects users to know what to ask, and offers little guidance.
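The guided pattern described above is often implemented as slot filling: the assistant keeps asking follow-up questions until it has every piece of information the task requires. Here's a minimal sketch of that idea; the task name, slot names, and prompt wording are illustrative assumptions, not details from the project.

```python
# Guided conversational design as slot filling: ask a follow-up question
# for each required piece of information that the user hasn't supplied yet.
# Task, slots, and prompts below are hypothetical examples.

REQUIRED_SLOTS = {
    "book_meeting": {
        "date": "What day should the meeting be?",
        "time": "What time works best?",
        "attendees": "Who should be invited?",
    }
}

def next_prompt(task, filled):
    """Return the follow-up question for the first missing slot, or None."""
    for slot, question in REQUIRED_SLOTS[task].items():
        if slot not in filled:
            return question
    return None  # every slot is filled; the task can be completed

# Example: the user has only provided a date so far,
# so the assistant asks about the time next.
print(next_prompt("book_meeting", {"date": "Tuesday"}))
```

An unguided design, by contrast, would skip the follow-up loop entirely and simply fail (or guess) when a slot is missing, which is why it demands more knowledgeable users.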
Phase 2: Ideation
After we finished our discovery research, we moved on to phase 2: Ideation (here called brick-laying for presentation purposes).
The concept here is to create a more complete foundation and develop our ideas for the experience design phase.
As we moved through the process, our research became more granular.
Instead of only conducting interviews, we tried to observe users in their work environment. To supplement our qualitative research, we conducted surveys to provide quantitative data.
Every project is different at a UX firm, but we always try to utilize as many of these research techniques as budget, necessity, and scope allow.
Thanks to thorough user research, the idea of designing an AI voice assistant was becoming less and less daunting. The solution was taking shape because of our growing understanding of the audience.
The most important thing in user research is to stick to your process. Don’t waver. The predictability of your approach is what distinguishes experienced UX designers.
Let’s highlight a couple of techniques that were particularly important in this phase.
Personas and empathy maps
As we developed a better understanding of who, exactly, our end-users were, we began to translate our research into personas.
I realize personas are far from an advanced technique, but when we combined them with empathy maps, they played a critical part in this project.
Because there’s a natural hierarchy in enterprises, the specific way each persona would use the AI assistant differed.
Our business consultant persona would use the application more for time tracking, while our CEO persona would rely more heavily on meeting scheduling.
Each persona helped us narrow down the most important use cases for each segment of our audience. Rather than blindly develop commands for the voice interface, we began to understand what people would find most valuable.
Plus, we discovered something even more important when we augmented our personas with empathy maps: each cohort had a distinct emotion they needed to feel toward the AI assistant.
The research made clear that if people didn’t connect emotionally, they wouldn’t adopt the assistant.
Needless to say, this was a watershed moment in our research, because it gave us a clear direction for how to design the AI’s conversational capabilities—and how to shape its personality.
Ethnographic Field Studies
I’ve always been a proponent of observing users in the field.
Observing users where they spend the majority of their time using your application is critical to uncovering their biggest challenges and most important goals.
After conducting multiple field studies in an enterprise environment, several things became clear:
1: The application needed to have other means of input in addition to voice.
Business is fundamentally a private affair. So even if the voice functionality worked flawlessly, there might be times when our audience didn’t want to announce their task to others.
For example, new employees might not want to ask a question out loud for fear of embarrassment.
Further, the audience was often traveling. So while an initial request with voice might be fine, the environment could change suddenly, and they might need to switch to a text input instead.
2: The AI needed to be anticipatory.
I’ve read Alexa has over 1,000 skills that she can perform. The all-too-obvious problem is that no one knows what to ask her.
The same challenge exists with any voice interface. People need to know what they can ask it. Even with a limited amount of functionality, we still observed that users needed guidance to understand what the assistant was capable of.
Image caption: To make it easier for users to start a task, we created visual prompts that serve as guideposts.
Similarly, users craved an AI assistant that was able to change parameters mid-request and complete a task with new information—similar to how a person would.
For example, when a business consultant logged a large number of hours, the application would let them know that they were approaching the project threshold.
In this way, users would learn to trust the assistant with important tasks. As a result of its proactive functionality, the AI inspired feelings of assurance and trust—emotional needs we knew had to be fulfilled thanks to our research.
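The anticipatory behavior above can be sketched as a simple check that runs whenever hours are logged: the assistant compares the running total against the project budget and volunteers a warning before being asked. The 80% warning ratio and the message wording are assumptions for illustration, not figures from the project.

```python
# A sketch of proactive assistant behavior: after each time-tracking entry,
# compare the total against the project budget and speak up unprompted
# when the user is approaching (or has hit) the threshold.
# The 80% warn_ratio and message text are illustrative assumptions.

def log_hours(logged_so_far, new_hours, project_budget, warn_ratio=0.8):
    """Record hours and return (new_total, proactive_notice_or_None)."""
    total = logged_so_far + new_hours
    if total >= project_budget:
        return total, "You have reached the project's budgeted hours."
    if total >= warn_ratio * project_budget:
        return total, f"Heads up: you're at {total}/{project_budget} hours for this project."
    return total, None  # nothing worth announcing; stay quiet

# Example: a consultant at 70 hours logs 15 more against a 100-hour budget,
# so the assistant proactively warns them.
total, notice = log_hours(70, 15, 100)
print(notice)
```

The key design choice is that the notice is generated as a side effect of a task the user was already doing, so the assistant appears helpful rather than naggy.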
Translating what we learned
Like every UX design project, research was fundamental to the success of the experience design.
By adhering to our process in this project, we learned:
- The primary tasks our audience was struggling with
- Emotional needs the assistant needed to fulfill in order to succeed
- How to structure the personality of the AI
- Which functionality was the highest priority for our audience
Of course, these were only the first few acts of the design process. But that’s a topic for another day.
What’s important to remember is that, regardless of the complexity of a project, there are established research techniques and processes you can use to inform your design.
When ambiguity strikes, rely on your research.
Pavel Bukengolts is the Director of User Experience at DePalma Studios. With over twenty-five years of experience in behavioral design and user experience, Pavel has led teams on expansive projects for brands like United Airlines, Healthtrust Purchasing Group, and Sitecore.