Tuesday, May 22, 2007

Where Is Intervoice Going With Its New Multimodal Self-service Interface Technology?

Copyright © 2007 The Unified-View. All Rights Reserved Worldwide

May 21, 2007

Executive Interview:

Intervoice’s UC Breakthrough For Multimodal Self-service Applications

By Art Rosenberg, The Unified-View

Intervoice is an experienced name in the IVR and contact center industries, not only as a provider of traditional voice response technologies, but also as a full-service designer and developer of customized voice and speech self-service applications for over five thousand satisfied customers. So, when the company suddenly announced its new “multimodal” approach to self-service applications for mobile “smartphones,” I applauded it as the industry’s first move toward supporting the interface needs of unified communications where the need is greatest: the mobile consumer.

Of course, I had a lot of questions about exactly where Intervoice was going with their technology and welcomed the opportunity to sit down with Ken Goldberg, Intervoice’s Senior Vice President of Corporate Development & Strategy, to discuss them at length. I consider the information important for anyone interested in where telephone-based voice applications will be going in the coming world of UC and mobile devices.

To put things into a technology and application perspective, Intervoice claims to be first to market with a software applications platform, a Multimodal Portal, that exploits a new W3C language standard, State Chart XML (SCXML). This enables voice interfaces to integrate concurrently with existing IP-based visual application interfaces, giving the user a flexible choice of voice, visual, or combined “multimodal” interfaces to self-service applications through any user communication device.

Such flexibility will be particularly useful for mobile users who have screen-enabled smartphones, but who may be dynamically constrained by their environment as to what medium to use for input and/or output (voice or text). Whether in a noisy or “silence required” environment, or where an “eyes-free, hands-free” interface is needed (driving), dynamic application interface flexibility will help consumers give or get information whenever they need to. Multimodal interface technology can also be used for any business process applications used internally by enterprise personnel or by external business partners.

1. AR: What major business problems is Intervoice addressing with its multimodal interface technology?

KG: Our customers are still interested in the same major issues – how to lower the cost of servicing customers while increasing customer satisfaction and brand loyalty. For more than 20 years, Intervoice has been developing voice self-service applications for telephone callers wanting information. We saw dramatic cost reductions as companies introduced touch-tone self-service, but it was speech interfaces that began to increase caller satisfaction and significantly expanded the use of such applications.

However, a speech interface has limitations as a means of exchanging information. Noisy environments, or situations where language translation is not highly effective, negatively impact the customer experience for self-service telephone applications. These inherent limitations on voice create a need to add the efficiencies of a visual user interface and to give users greater flexibility and choice in how they interact with a business process application.

There are also inefficiencies in using voice alone for informational output, especially for mobile users. With consumers becoming more mobile and multi-channel, we can now add the optional power of screen outputs from any information source to a mobile device. This significantly enhances the customer experience through increased flexibility, efficiency, and effectiveness of any self-service business application.

2. AR: What have been the greatest barriers and issues for implementing multimodal application user interfaces and why are they now becoming practical?

KG: There have been three significant barriers: the interface technology of the mobile devices themselves, the bandwidth of the available wireless networks, and the availability of software for voice/data information access and delivery.

· At a minimum, multimodal mobile devices must allow application output to be received via SMS text messaging, so the caller can act on that information and make choices during a self-service phone call. A significant majority of today’s phones have this capability.

However, with the proliferation of new “smartphones” with screens and text entry from RIM, Motorola, Nokia, Palm, and others, users have the device technology for “real-time” multimodal applications in their hands right now. A real-time multimodal interface enables simultaneous voice and data, so users can visually interact with information on a mobile device while they are also interacting via speech and audio. According to Forrester’s 2006 research, more than 10% of U.S. mobile phone users aged 18 to 50 are already carrying smartphones, and for users aged 18-26, the penetration is 14%.

· Wireless broadband is becoming more available to handle simultaneous voice and data – with 3G networks expanding around the world and Wi-Fi and WiMax technology filling in local coverage, many more customers will soon have access to high-speed simultaneous voice and data service.

· On the software side, simultaneous voice and data transactions require the ability to run two interface versions of an application process simultaneously – one over the voice channel and one over the data (text) channel. Thanks to a new standard that’s now emerging – State Chart XML or SCXML – we have the framework needed to make simultaneous voice and visual interfaces a reality.
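To make the state-management idea concrete, here is a minimal, hypothetical SCXML sketch (not Intervoice’s actual implementation) in which the voice channel and the visual channel are modeled as parallel states within one session, so an event on either channel can update the shared application state. All state and event names here are illustrative assumptions:

```xml
<!-- Hypothetical sketch: one SCXML session coordinating voice and visual channels -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="session">
  <parallel id="session">
    <!-- Voice channel: drives the spoken dialog -->
    <state id="voice" initial="prompting">
      <state id="prompting">
        <!-- "caller.spoke" is an assumed event name for a recognized utterance -->
        <transition event="caller.spoke" target="confirming"/>
      </state>
      <state id="confirming"/>
    </state>
    <!-- Visual channel: drives the on-screen form -->
    <state id="visual" initial="displaying">
      <!-- "user.tapped" is an assumed event name for a screen selection -->
      <state id="displaying">
        <transition event="user.tapped" target="updating"/>
      </state>
      <state id="updating"/>
    </state>
  </parallel>
</scxml>
```

Because both channels live in one state machine, an answer given by voice can immediately be reflected on the screen, and a screen selection can redirect the spoken dialog, which is the essence of running the two interface versions simultaneously.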

With the progress being made in bringing down all three barriers, we moved aggressively to adapt our technology and offer our enterprise customers new ways to differentiate their self-service applications. Not only can they further reduce costs, but they can also improve their consumer interactions, satisfaction, and brand loyalty. Enterprises can now extend their branding strategies to personal mobile devices with new services and visual advertising campaigns for brand promotion.

3. AR: Where does Intervoice see its role for supporting multimodal business applications?

KG: Intervoice has long been very successful in providing a software applications platform for designing, developing and managing voice self-service applications, as well as the expertise to help customers across all industries to implement and maintain these solutions. We are now expanding the scope of our platform to include the new multimodal capabilities and we are expanding our design and development capabilities to add the efficiencies of a visual interface. The latter includes both screen outputs as well as data (text) input to more efficiently deal with information entry and output needs of a mobile multimodal user.

· We’ve introduced and demonstrated real products for multimodal devices and services that can help our enterprise customers create and launch these new multimodal self-service applications.

· We’ve leveraged three of our recent multimodal technology patents, as well as the new SCXML standard for managing state, to deliver a next-generation Voice Portal platform that has the capability to handle rich multimodal application development. Industry analysts have acknowledged our product development as the first-to-market for voice in a multimodal application environment.

· In addition to software, Intervoice also has the in-depth implementation expertise within our 175+ person Global Consulting Services organization to do application “discovery” within our customers’ business processes, identify the areas that will produce the highest ROI, and assist our customers in designing, developing and delivering new multimodal applications. Just as IVR applications required our interface design and implementation expertise, we are now training our services staff on multimodal interfaces to be ready when our enterprise customers are.

4. AR: What feedback have you gotten from enterprise customers so far?

KG: Very positive – customers are excited and recognize the value right away. At our user conference and in individual meetings, when we show the technology in a live demo with a working multimodal application, customers really get it.

They immediately understand how our multimodal solution solves the age-old experiential problems of voice-only IVR applications. As an IVR application provider in many vertical segments, we have the applications knowledge base to easily move the enterprise market from traditional voice self-services to the coming world of mobile devices and multimodal interfaces.

Enterprise organizations also see our multimodal self-service approach as minimizing one of the big issues you have written about in the past: the need for mobile users, who are often in noisy environments, to talk to a live agent who has no indication that the caller is mobile and shouldn’t be treated like a landline caller. The last thing a busy mobile user needs is to be put into an on-hold queue that costs them talk-time on their mobile service.

Although the technology is very new, enterprise customers are already engaging us to help them create some truly innovative multimodal applications.

5. AR: Who or what will Intervoice be competing against and what uniquely positions Intervoice to be a leader in this new market?

KG: There have been a couple of announcements by other vendors of mobile offerings, but they are still single-streamed. Either they focus on using speech input to just add information to a form on the device, or they show how an IVR voice output can also be displayed visually on the device using Flash or video. No one else has launched and demonstrated a capability to develop true multimodal applications. Applications developed using the Intervoice Voice Portal uniquely allow consumers to use a combination of speech and visual interfaces simultaneously.

With our implementation of SCXML, Intervoice is the first company to have a comprehensive development environment that can manage “state,” so that within a single call, users can selectively interact in real-time with an application using a combination of speech and visual interfaces. Our approach also preserves traditional voice interactions, enabling a graceful transition from legacy voice response applications to added multimodal capabilities.

6. AR: How will your technology work in a hosted service environment?

KG: Our multimodal voice portal and the applications that run on it will work very well as hosted solutions, given that these are really just extensions of the types of voice portal applications that we host today for our customers. With server-side IP communications infrastructure, our multimodal software applications platform and customer applications can be located anywhere and managed by either the customer or our hosted solutions organization.

We believe an increasing percentage of customers will opt for a hosted solution rather than a CPE-based one. Due to our expertise in voice application hosting, customers already turn to Intervoice to outsource their self-service solutions because hosting is typically not a core competency and requires continual application changes. They also turn to Intervoice to ensure they always have the latest software technology updates and to leverage our state-of-the-art Network Operations Center (NOC) with regular 24x7 monitoring of their applications and systems. Customers tell us that we usually manage both speech and multimodal applications, as well as the voice application platforms, better and more cost-efficiently than they could themselves.

7. AR: How will the use of multimodal technology transform business and consumer-facing applications over the next five years?

KG: Multimodal applications will do to mobile devices what the Internet has done for the consumer PC. With multimodal interface capabilities, we are on the cusp of something that big for mobile devices. Multimodal interface flexibility enables mobile devices to become a personalized, “virtual” kiosk for end users to take care of a wide array of business transactions, quickly and easily, using a combination of speech and visual interfaces to meet varying day-to-day situations.

Already, Internet automation has become a way of life – paying bills and shopping online, getting a boarding pass from a kiosk at the airport, making a doctor’s appointment, and so on. Over the next five years, multimodal mobile interfaces will also become ubiquitous. Companies will exploit business process applications that proactively “push” relevant, time-sensitive information to customer devices in real time.

This, in turn, will lead to self-service business applications offering visual or voice choices to help customers make time-sensitive decisions faster and with greater accuracy, regardless of the environment. And these companies will be successful because their employees, customers, subscribers, business partners, and other end users will all be accustomed to using multimodal interfaces to conduct everyday business activities by themselves. Just like the Internet, there will be a time when we won’t know how we lived without mobile, multimodal capabilities.

Comment: The Challenge For Customer Support Staffs

The availability of dynamic multimodal user interfaces will have a significant impact on automating enterprise business process applications for all end users. The days when IVR self-services focused primarily on customer needs, because customers had only telephone access to enterprise information, are over. Now customers, internal staff, and business partners can all efficiently exploit Web-based business applications that support both visual and speech interfaces from multimodal devices, particularly for mobile access.

However, as we learned long ago with IVR applications, self-services will always require access to live assistance, especially for sensitive customer contacts. Multimodal application interfaces will have a ripple effect on traditional customer-facing support staff, because they too must be able to deal with customers who are both mobile and using visual rather than voice application interfaces. Preparing customer support staff (call centers) for this paradigm shift can’t really begin until more real-world experience is gained in how multimodal users with handheld mobile devices will require live assistance, whether by traditional voice conversation or visually with IM.

What Do You Think?

Send your comments to me at artr@ix.netcom.com.

What’s New With UC From UC Strategies

To get an idea of the different perspectives and issues related to UC technologies for the enterprise, go to the UC Strategies web site for better insights on migrating to UC.

UC Industry Update including Highlights from VoiceCon Spring

You can also review the presentations given by the UC Strategies experts at TMC’s IT Expo early this year.

Attention CIOs: Watch this great new Webcast from Avaya and Microsoft on the practical “Why’s” and “How’s” of migrating to UC!

Go to: http://cxolyris.cxomedia.com/t/833300/379459/8118/0/

This discussion with the two dominant enterprise communications technology providers in the text messaging and telephony worlds highlights the practicalities of migrating to UC and also underscores the UCStrategies.com industry-wide initiative for identifying individual business user requirements.

Stay tuned to the UCStrategies.com web site for initial availability of the UC Profiling service. We are in discussions with all the major UC technology providers to support this initiative to help both their customers and sales channels plan their UC migrations properly and effectively.