The French saying ‘beau geste’, or beautiful gesture, describes an action that is noble but may result in unintended consequences. The term could aptly describe a few of the user interfaces we encounter day to day. These interfaces may look beautiful, but in action they deliver a less than perfect experience.
As technology changes, the ways we interact with it change as well. With the introduction of motion-sensing devices, such as Thalmic Labs’ Myo, the Leap Motion, and Microsoft’s Kinect for Windows, our interfaces are no longer tied to flat two-dimensional planes. Instead, we’re free to start using natural gestures in three-dimensional space. Granted, most of these gestures are used to interact with two-dimensional objects, so motions tend to mirror the taps, drags, and pinches we already use on touch-based devices.
As gesture controls evolve, it’s not difficult to imagine gestures eventually becoming more abstract and disconnected from what we see represented onscreen. Much like keyboard combinations that are used to shortcut UI interactions, emblematic gestures will likely be adopted to symbolize a combination of motions.
This gives us an incredible opportunity to design new types of experiences. However, if we ignore the context in which these gestures are used, we create room for unintended consequences.
Gesture-based interfaces take advantage of the fact that, as humans, we use gestures naturally. It’s interesting that the use of gestures tends to be universal. There isn’t a community on earth that doesn’t use them in some form or another in order to communicate.
As the technology improves and becomes ubiquitous, it’s natural to assume that new interface conventions will eventually be defined. Well-meaning designers and developers may wish to tap into the rich set of gestures we use every day in order to make their interfaces feel familiar and easy to use. While some simple gestures, like smiling, may be universally understood, we need to be careful not to mistakenly assume that all gestures are shared.
Most gestures are tied directly to language. This makes sense when you consider that gestures and speech are both processed by the same areas of the brain (Broca’s and Wernicke’s areas). This is why American Sign Language (ASL), the third most used language in the United States, is not understood globally.
Culture influences gestures as well. An okay or thumbs-up hand signal may indicate all is well to an American, but it may communicate something completely different to a person of another culture. Similarly, while most of us may understand a head nod to mean ‘yes’, someone from Bulgaria may interpret the same nod as meaning ‘no’.
Clearly then, interfaces shouldn’t rely heavily on gestures common to a particular language group or culture. Gestures that feel familiar and natural to one group of users may feel unintuitive, or even offensive, to another.
It is possible to bridge the gap between users by relying on the same techniques we currently use in visual interfaces to localize content. Sets of recognized gestures could be defined for supported languages and regions. For example, an interface could expect users in North America to nod their head up and down on the sagittal plane to communicate yes, while it anticipates that users in South Asia may provide the same answer by tilting their heads side to side along the coronal plane.
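The localization approach described above can be sketched in code. The following is a minimal illustration, not a real motion-sensing API: the gesture labels, locale codes, and the `interpret_gesture` function are all hypothetical names chosen for the example.

```python
from typing import Optional

# Hypothetical locale-keyed gesture maps: each locale resolves a raw
# gesture label (as a recognizer might emit it) to a meaning.
GESTURE_LOCALES = {
    # North America: a sagittal-plane nod signals agreement.
    "en-US": {"head_nod_sagittal": "yes", "head_shake_coronal": "no"},
    # Parts of South Asia: a coronal-plane head tilt can signal agreement.
    "hi-IN": {"head_tilt_coronal": "yes", "head_shake_coronal": "no"},
    # Bulgaria: a nod can mean "no" and a shake "yes".
    "bg-BG": {"head_nod_sagittal": "no", "head_shake_coronal": "yes"},
}

def interpret_gesture(locale: str, gesture: str,
                      default_locale: str = "en-US") -> Optional[str]:
    """Resolve a gesture label to a meaning for the user's locale,
    falling back to a default locale when the locale is unknown."""
    mapping = GESTURE_LOCALES.get(locale, GESTURE_LOCALES[default_locale])
    return mapping.get(gesture)

# The same physical motion yields different answers per locale.
print(interpret_gesture("en-US", "head_nod_sagittal"))  # yes
print(interpret_gesture("bg-BG", "head_nod_sagittal"))  # no
```

The design mirrors how string resources are localized today: the recognizer stays locale-agnostic and emits neutral motion labels, while a per-locale lookup table supplies the meaning.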
A completely different solution might be to codify new auxiliary languages with small, context-driven sets of signals. A good example can be seen in the set of gestures used by aircraft marshallers to visually communicate instructions to pilots on the tarmac. Creating such gesture sets would require a coordinated effort among designers, but would yield an improved experience for the majority of users.
New motion-based technology gives us the potential to develop beautiful, rich experiences for our users. However, what we create with this technology shouldn’t carry with it unintended consequences. As interfaces evolve, let us strive to create beautiful experiences for all.