The New Frontier of Multi-modal Interfaces: Are you ready?
Humans communicate and learn using a combination of their senses. Over the past few years, there has been a great deal of excitement over voice interfaces and the concept of “voice first” design. While important in and of itself, I believe “voice first” is a transitional step to the next phase of UX design: true multi-modal interfaces. These new interfaces will adapt to the task at hand, selecting the medium and message most convenient for the user in any given situation. For example, it is easier to tell your car a destination using your voice rather than typing it, but it is easier to monitor your car’s current location and the time remaining on the trip by glancing at a display. Giving users the ability to seamlessly jump between communication modes within a single application may be the key to creating truly effective and pleasing human-computer interactions.
As the Director of UX for SiriusXM’s Connected Vehicle Group, I have the privilege of seeing the evolution of these interfaces firsthand. I’ve prepared this session as a primer for experienced UX practitioners ready to make the jump to multi-modal interface design. Where is industry adoption right now? Where do experts predict it will be within the next three to five years? How will these new interfaces be prototyped and constructed? What processes, tools, and patterns will we be able to carry forward, which will need to be updated, and what still needs to be discovered? While we will not have time to cover every possibility, this session will provide you with a framework to begin your own investigations into this new frontier of user experience.