6 reasons why UX designers should explore gestural interaction

4 min read
Yanna Vogiazou • Mar 11, 2016

The movie Minority Report introduced us to the concept of gestural interaction: “speaking” to a computer by waving your hands—no clicking or swiping necessary.

But is gestural interaction really going to drive a new generation of interactive experiences, or is it just a tech trend that’s going to pass?

We’ve been fascinated with touchless interaction for quite some time. But we’re just starting to see gestures incorporated into mainstream interactive products—and not just Microsoft Kinect games.

Image by GabboT. Creative Commons Attribution-ShareAlike 2.0 Generic.

Until now, user interface design has focused on interactions driven by mouse, keyboard, and touch input. Our tools have evolved to help us design better experiences for these inputs while adapting to the specific guidelines of the different mobile operating systems and web standards.

But new inputs like voice and gestural interaction add new dimensions that require new ways of thinking about user experience as a whole. Soon, responsive websites will no longer just mean “touch-friendly” sites that adapt to different screen sizes. We’ll need to design truly multimodal experiences, combining various inputs in a seamless flow. We should add “gesture-friendly” to our vocabulary.

“Designers need to add ‘gesture-friendly’ to their vocabulary.”

Advances in the field of computer vision—the way computers see and interpret our behavior—might turn gestural interaction into a daily reality faster than you might expect.

But it’s not just technology that drives innovation. Below, we’ll explore emerging user needs that call for better ways of interacting with technology.

1. We need better tools to wade through increasing amounts of information

Think data visualizations, dashboards with customer analytics, maps with clusters, and layers of information. How do we navigate through this constantly growing landscape of data?

We use our mouse, as always, to click around the available navigation and buttons for manipulating a view. We use touch gestures to swipe through our mobile device interfaces and see more stuff. Or, we type in a few search terms and hope to get back the right amount of information that we can scroll through.

But what if our most familiar inputs and interfaces are limiting? Are there any better, alternative ways for us to interact with large amounts of data?

The future of big data interactions is multimodal: a combination of what we have now, and more. Imagine making a hand gesture to get an overview of all the information, then pointing at an area to see more detail, and finally touching the screen to select something. Moving smoothly from gesturing to touching, and back again.
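To make that flow concrete, here is a minimal sketch of how such a multimodal view controller might behave. Everything in it is invented for illustration (the event shape, the modality names, the view levels); real gesture, pointing, and touch APIs would feed events into it.

```typescript
// A hypothetical multimodal view controller for a data visualization.
// The event shape, modality names, and view levels are all invented here.

type ViewLevel = "overview" | "detail" | "selection";

interface MultimodalEvent {
  modality: "gesture" | "point" | "touch";
  target?: string; // id of a data region or item, if the input has one
}

interface ViewState {
  level: ViewLevel;
  focus?: string; // the region or item currently in focus
}

// Each modality maps to one level of granularity: a broad hand gesture
// zooms out to everything, pointing narrows in, touch commits a selection.
function reduce(state: ViewState, event: MultimodalEvent): ViewState {
  switch (event.modality) {
    case "gesture": // e.g. a spread gesture: show the whole dataset
      return { level: "overview" };
    case "point": // pointing at a cluster: drill into it
      return event.target ? { level: "detail", focus: event.target } : state;
    case "touch": // tapping the screen: select a concrete item
      return event.target ? { level: "selection", focus: event.target } : state;
  }
}

// Usage: events arrive from whichever sensor fired; the view just follows.
let view: ViewState = { level: "overview" };
view = reduce(view, { modality: "point", target: "cluster-7" });
view = reduce(view, { modality: "touch", target: "record-42" });
console.log(view); // { level: "selection", focus: "record-42" }
```

The point isn’t the code itself but the design decision it encodes: each modality maps to a different level of granularity, so users can move fluidly between them.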

The increasing amount of data isn’t the only trend that will drive innovation in user interaction. Working in the startup world and collaborating with larger organizations, we’ve also observed another change.

2. Users interact differently when they’re sharing a screen

The paradigm of a single user in front of a computer at work is changing. Collaborative workspaces are evolving as demands on teams grow, increasing the need for better communication and sharing tools. More and more, remote participants need to interact efficiently with the same material the team in the meeting room is discussing at that moment.

It’s no longer just about presenting slides to a group of people. It’s about being able to manipulate content in real time, to sketch, to brainstorm, and to piggyback on others’ ideas.

On the tech side, we’ve noticed screens increasing in size and resolution. Electronics manufacturers have started to explore use cases for TVs beyond the living room: how might people use them in public spaces, in shops, in the workplace?

Yet there’s one big constraint. Even the best-designed remote control is still a single-user input device, ill-suited to multi-user interaction. Who controls the screen? How do multiple users pass control of the screen from one person to another? And how can this be done effortlessly, without taking attention away from the content or the unfolding conversation?

This trend has an immediate impact on our design process. Designing for multi-user experiences is very different from designing for single-user experiences.

3. Screens can actually get in the way when we need to interact with things

Ideally, you’d be able to point at different objects around your home and tell them what to do—turn lights off and on, preheat your oven, mute the TV. This is a classic vision of the Internet of Things, where most of the time there are no screens involved at all. It’s hard to imagine this vision becoming a commercial product without some combination of hand gestures and voice input. After all, we don’t always want to be surrounded by screens that help us do simple things—we’d like things to respond directly.

The automotive industry is waking up to the potential of using gestures in the car. The 2016 BMW 7 Series is the first production car with gesture control, and we’ll soon see more innovation in this space. It’s not about the hype, though. As long as we have to keep one hand on the steering wheel, we can use the other to perform simple gestures to adjust the volume or answer an incoming call.

“UXers should explore gestural interaction—it’s a chance to invent new interactions.”

Gesturing or pointing in the air requires less precision and attention from the driver than tapping a couple of buttons on the dashboard. If gestures are just quick shortcuts to frequent actions, the safety and convenience benefits are easy to see.
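As a thought experiment, a shortcut map like that could be as simple as a lookup from coarse gestures to actions. The gesture names and car controls below are made up for illustration; this isn’t any carmaker’s actual API.

```typescript
// Illustrative only: a lookup from coarse in-air gestures to frequent
// in-car actions. Gesture names and controls are invented, not a real API.

type Gesture = "rotate-cw" | "rotate-ccw" | "swipe-right" | "swipe-left";

// Stubs standing in for the car's real controls.
function adjustVolume(step: number) { console.log(`volume ${step}`); }
function answerCall() { console.log("call answered"); }
function dismissCall() { console.log("call dismissed"); }

// Each gesture is a shortcut to exactly one frequent action, so the
// driver never has to look at a screen or hunt for a button.
const shortcuts: Record<Gesture, () => void> = {
  "rotate-cw": () => adjustVolume(+1),  // clockwise twirl: volume up
  "rotate-ccw": () => adjustVolume(-1), // counter-clockwise: volume down
  "swipe-right": () => answerCall(),    // swipe toward you: pick up
  "swipe-left": () => dismissCall(),    // swipe away: decline
};

// When the recognizer reports a gesture, just fire the mapped action.
function onGesture(g: Gesture) {
  shortcuts[g]();
}
```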

4. We need new ways to interact within our fantasy worlds

Microsoft Kinect, the product that actually brought gestural interaction into the living room, wasn’t very successful because it didn’t meet gamers’ expectations. The audience, accustomed to the advanced, complex interactive experiences enabled by gaming consoles, found gesture-enabled games a bit too simple.

Now let’s look at experiences beyond gaming. Visitors to Mobile World Congress 2016 commented that mobile-enabled virtual reality (VR) was the most prominently featured technology. VR enables a range of entertainment, educational, and voyeuristic experiences that are becoming mainstream.

Consider that last year, Facebook-owned Oculus acquired Pebbles Interfaces, a gesture recognition company. This technology will soon enable Oculus Rift users to freely interact with objects within those fantasy worlds using gestures.

Designing virtual reality experiences involves multiple input modalities. As users, we experience these best as a simulated reality when we have tactile and haptic feedback to help us navigate.

Teslasuit is a startup developing electro-haptic technology in the form of a smart textile suit that simulates bodily sensations normally transmitted through our neural system—like feeling touched or being cold.

To design those future simulated experiences, we’ll need to consider all human senses to imagine how our entire body interacts with the environment.

At the moment, VR and even augmented reality are quite niche, at least from the point of view of a UX designer looking for their next job opportunity. As we move towards more physical interactions with technology, this might change. What if a marketplace for VR consumer apps emerges and becomes truly mainstream? What happens when VR devices become cheap, widely accessible, and visually stunning? We’ll need a broader understanding of user inputs and feedback within those spaces to design engaging experiences for what’s coming.

5. In some situations, interaction with technology is limited

There are many contexts outside the consumer space that UX designers could be designing for in the future. In some cases, we might even make the world a better place.

Consider everything that happens in an operating room before and during surgery. A surgeon needs to enlarge a radiographic image on the computer to see details while wearing gloves in a sterile environment. Touching a mouse, keyboard, or screen becomes a potential health hazard in this context. Products like GestSure promise a valuable alternative: a simple gesture in the air changes the current view to show more information, at the right moment, without the need for assistance.

“We cannot rely on current UIs to support the interactions of the future.”

Anyone who’s ever seen medical software in use knows that switching between different views and data isn’t trivial. It involves navigating through numerous options and hovering over cryptic icon buttons to find the right one. So how can such software be used with air gestures?

This is where interaction design comes into play.

We cannot rely on current UIs to support the interactions of the future. As computer vision technology evolves, enabling more reliable gesture detection, the user interface should evolve with it. Designers have a role to play in making user interfaces gesture-ready, so they can best serve those user needs.

6. This is an opportunity to be creative and invent new interactions

Another reason why UX designers should explore gestural interaction is opportunistic creativity—the chance to drive innovation. I still remember the first time I designed a touch-based mobile phonebook back in 2007. It was exciting, it was full of unknowns, and there were hardly any rules, as no platform-specific UI guidelines for touch existed yet. All I had to guide me was knowledge of human-computer interaction principles and instinct. Learning by doing was as powerful as it got.

“The future is multi-modal.”

We’re on a similar path of freedom right now. We have the chance to shape successful combinations of gestures and UI feedback. We know that simply transferring our knowledge of designing touch interfaces to gestural interaction won’t work. We’re free to invent new guidelines and UI pattern libraries, challenge them, test them through practice, and improve them.

Finally, a little piece of inspiration. Our engineering team has been developing a grab-and-drop gesture for selecting and extracting objects. We turned this exercise into a colorful playground by creating a gesture-controlled color picker: just grab colors and drop them to create gradients. No client requested this prototype—it’s our own experiment in interaction. But its simplicity and playfulness are irresistible to anyone who’s tried it.
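The prototype itself isn’t public, but the core interaction logic is easy to sketch. Here’s a minimal version, assuming a hypothetical hand tracker that reports a palm position and a grab state once per frame; the HandFrame and ColorSurface interfaces below are illustrative, not a real API.

```typescript
// A minimal grab-and-drop loop, assuming a hypothetical hand tracker
// that reports a palm position and a grab state once per frame.
// HandFrame and ColorSurface are illustrative, not a real API.

interface HandFrame {
  x: number;           // palm position in screen coordinates
  y: number;
  isGrabbing: boolean; // true while the hand is closed into a fist
}

interface ColorSurface {
  colorAt(x: number, y: number): string;                // sample a swatch
  dropColor(x: number, y: number, color: string): void; // paint it down
}

// The entire interaction is one piece of state: the color being held.
// Closing the hand picks up the color underneath it; opening drops it.
function makeGrabAndDrop(surface: ColorSurface) {
  let held: string | null = null;
  return (frame: HandFrame): void => {
    if (frame.isGrabbing && held === null) {
      held = surface.colorAt(frame.x, frame.y);  // hand closed: grab
    } else if (!frame.isGrabbing && held !== null) {
      surface.dropColor(frame.x, frame.y, held); // hand opened: drop
      held = null;
    }
  };
}
```

That the whole interaction reduces to a single piece of state, the color you’re holding, is exactly what makes the gesture feel natural.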


Remember: the future is multi-modal. It’s not just about gestures—it’s about creating the best combination of user inputs and designing the right interfaces to support the user needs at hand. And it’s those user needs that are the driving force for innovation.

Next time you get a brief for a new project, try answering these questions: if you could choose any device, which interaction paradigm would best address your users’ needs? Are you looking at a single-user experience, or could more people be involved? How would you handle the flow from one modality to another?
