Multi-modal design: Gesture, Touch and Mobile devices…next big thing?

What is Multi-modal interaction?

Multi-modal interaction is an area of Human-Computer Interaction (HCI) with a long history of usability research and empirical study. The virtual reality and game design research communities have been exploring it commercially since the early 1980s, with pioneering researchers such as Brenda Laurel at Atari.

Today, researchers like Jeff Han are joining industrial efforts to bring multi-modal interaction to life. Last week, Microsoft announced Surface, an interactive coffee-table display that responds to touch and gesture. Users can explore information linked to physical objects placed on the table’s surface (T-Mobile will use it in stores to support handset purchase decisions).

The scenario videos on the Microsoft Surface website are worth a look. Very well presented.

Also, if you haven’t seen Jeff Han’s TED conference presentation, it’s an amazing, must-see demo of his Surface-style multi-touch workspace.

Why multi-modal interaction design?

Multi-modal design brings the spirit of HCI to life by harnessing the rich sensory input afforded by the human body-mind (touch, gesture, sight, sound, voice… smell isn’t here yet, but it’s in the works).

As interface designers, we have had to make do with the flat, lifeless limitations of desktop PCs (windows, icons, menus), a far cry from the original vision of how humans should use computers advocated in the late 1960s:

Computer graphics and interface pioneer Ivan Sutherland told us our computers should not be mere 2D screens that provide information, but instead, they should be ‘windows upon which we look into a virtual world…where we can see, hear and feel’ multi-sensory information.

I began studying multi-modal interaction ten years ago during my early virtual reality research. It is an area of interface design that is truly fascinating for its potential, and equally challenging because of the shifting contexts in which users interact. As a designer, the questions become (a sketch after the list shows one way to frame the trade-offs):

  • “Which sensory pathway does the user have available to complete this goal in that context?”
  • “Which sensory system is the lead, and which is the secondary?”
  • “How much sensory overlap is available, tangible, appropriate?”
  • “How do users back out of, or recover from, a screen event in a dynamically changing physical environment?”
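To make these questions concrete, here is a minimal sketch of context-driven modality arbitration. It is illustrative only: every name in it (Modality, Context, pickModalities) is hypothetical, invented for this post rather than taken from any real toolkit, and the rules are placeholder assumptions about context.

```typescript
// Minimal sketch of context-driven modality arbitration.
// All identifiers here are hypothetical, invented for illustration;
// they are not from any real framework or API.

type Modality = "voice" | "touch" | "gesture" | "visual";

interface Context {
  handsBusy: boolean; // e.g. the user is driving
  eyesBusy: boolean;  // e.g. the user is watching the road
  noisy: boolean;     // e.g. a loud cabin defeats speech recognition
}

interface ModalityPlan {
  lead: Modality;      // the primary sensory pathway for this goal
  secondary: Modality; // a redundant channel for confirmation and recovery
}

// Choose a lead and a secondary channel from what the context leaves available.
function pickModalities(ctx: Context): ModalityPlan {
  if (ctx.handsBusy && ctx.eyesBusy) {
    // Driving: lead with voice unless the cabin is too noisy,
    // and keep a brief visual cue as the secondary channel.
    return { lead: ctx.noisy ? "touch" : "voice", secondary: "visual" };
  }
  if (ctx.noisy) {
    // Speech is unreliable; lead with touch, keep gesture as backup.
    return { lead: "touch", secondary: "gesture" };
  }
  // Desktop-like context: visual leads, voice is an optional overlay.
  return { lead: "visual", secondary: "voice" };
}

// Example: a driver in a quiet cabin gets voice as lead, visuals as backup.
console.log(pickModalities({ handsBusy: true, eyesBusy: true, noisy: false }));
```

The design point the sketch tries to capture is that the lead/secondary split is computed from context at interaction time, not fixed once at design time.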

Does gesture and touch interaction work in other contexts…like cars?

For the past five years, I have closely followed the emerging ‘Internet in your car’ trend in the automotive industry, aka “telematics usability”. A practical example I can share, one I have written about and spoken about at automotive telematics conferences, is the case of GM’s OnStar. Several years ago, GM provided me with a fully loaded Cadillac CTS for a week to evaluate OnStar, a speech-based system for assistance, navigation, communication, and other information services.

OnStar weighted its user interface toward “voice” (speech) interaction rather than a multi-modal interface. The result was a clunky system with a history of poor user adoption and satisfaction: in 2001, 60% of OnStar systems were switched off in owners’ vehicles. BMW, on the other hand, weighted its iDrive telematics solution toward a knob-like controller (tactile interaction), with 700 features buried in menus at the turn of a dial.

The result: eroded brand loyalty and confused, frustrated customers (including usability guru and BMW customer Jakob Nielsen). Nielsen’s wife said at the time that she would never buy another BMW…

The pattern in these design flaws for telematics human-factors engineers?

Don’t put all your eggs in one basket with a single modality.

It appears neither GM nor BMW provided adequate multi-modal support, each opting for a single “lead” sensory system (speech for GM, touch for BMW) over a mixed system.

I believe multi-modal interaction is almost always better than a single modality as an interaction design technique. But you must design multi-modal interfaces with care, as Oregon Graduate Institute professor Sharon Oviatt reminds us in her Ten Myths of Multimodal Interaction (PDF).
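To make the “eggs in one basket” lesson concrete, here is a minimal sketch, again with hypothetical names and no relation to any shipping telematics API, of binding one command to several modalities so that a user who loses one channel (say, speech in a noisy cabin) can still reach the goal through another:

```typescript
// Sketch: redundant modality bindings for one command, so that no single
// sensory channel is a point of failure. All identifiers are hypothetical.

type Modality = "voice" | "touch" | "gesture";

interface Binding {
  modality: Modality;
  trigger: string; // an utterance, button id, or gesture name
}

class CommandRegistry {
  private bindings = new Map<string, Binding[]>();

  // Register one command under several modalities at once.
  register(command: string, ...bindings: Binding[]): void {
    this.bindings.set(command, bindings);
  }

  // Resolve an input event from any modality to a command, if one matches.
  resolve(modality: Modality, trigger: string): string | undefined {
    for (const [command, list] of this.bindings) {
      if (list.some(b => b.modality === modality && b.trigger === trigger)) {
        return command;
      }
    }
    return undefined;
  }
}

const registry = new CommandRegistry();

// “Navigate home” is reachable by voice, touch, or gesture; if the cabin
// is too noisy for speech recognition, the touch path still works.
registry.register(
  "navigate-home",
  { modality: "voice", trigger: "take me home" },
  { modality: "touch", trigger: "home-button" },
  { modality: "gesture", trigger: "swipe-up" },
);

console.log(registry.resolve("touch", "home-button")); // "navigate-home"
```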

 
