Living in the era of virtual assistants

Wednesday, February 10th, 2016 at 6:30 PM

TI Auditorium

PROGRAM

6:30 - 7:00 PM Networking & Refreshments
7:00 - 8:00 PM Talks
8:00 - 8:30 PM Panel Session
8:30 - 8:45 PM Speaker Appreciation & Adjournment

Chair: MP Divakar
Organizer: MP Divakar
    

Session Abstract: Many would argue that virtual assistants like Siri, Cortana, OK Google, Moneypenny, etc., have room for improvement, but few would dispute that they are here to stay. Virtual assistants offer many advantages: aiding the visually impaired, building brand loyalty, collecting consumer inputs to build research databases, taking voice commands to navigate autonomous vehicles, and many others. Expectations of continued improvement have become the norm, to the extent that foolproof, fault-tolerant, and fail-safe performance is now a must-have feature.

The advances in digital/virtual assistants over the last five years have made them more proactive, responsive, and intelligent. Enabled by advances in personal and in-vehicle communications technologies, improved voice-recognition software has made them more conversational and easier to integrate into our daily lives. The mobility afforded by personal communication devices and enhanced by virtual assistants is now fully expected in the environs of connected/autonomous vehicles, stretching the definition of mobility to its literal and functional limits!

Building on our January workshop's theme of connected and autonomous vehicles, the IEEE Communications Society of Santa Clara Valley is pleased to present an evening program of presentations on "Living in the era of Virtual Assistants." We will explore how users can benefit from engaging with virtual assistants in the expanded ecosystem of personal mobility in today's world. We will also open a discussion on whether virtual assistants can improve quality of life and help realize increased productivity, higher efficiency, and a better user experience.

Speaker: Ilya Gelfenbeyn

Bio: Ilya Gelfenbeyn is the CEO and co-founder of API.AI, the conversational UX platform that 20k+ companies use to voice-enable connected devices, apps, and websites. API.AI also created and powers Assistant, the largest independent voice assistant in the world, with 28M users. He is an expert in artificial intelligence, natural language processing, and voice interfaces. Gelfenbeyn holds a BSc in Mathematics from Novosibirsk State University and an MBA from the University of Brighton (UK).

Title: Building conversational user interfaces for everything

Abstract: Siri, Cortana, and Alexa have demonstrated that users are ready to talk to services and devices. Conversational user interfaces are replacing and augmenting GUIs in many verticals, and the next challenge is to provide independent software and hardware developers with tools to design and implement custom conversation scenarios for their products. API.AI has been building such a toolset for several years, and in this session Ilya will describe the main challenges and design decisions in that work. He will also share important findings from real client implementations of conversational UX projects.

Speaker: Sayandev Mukherjee

Bio: Sayandev Mukherjee is a Senior Research Engineer in the Data Mining team of the Network Services Innovations Group at DOCOMO Innovations Inc. in Palo Alto, CA. Dr. Mukherjee received his Ph.D. from Cornell University, Ithaca, NY, in 1997. He has worked at Bell Laboratories, Marvell Semiconductor Inc., and SpiderCloud Wireless Inc., has over eighty publications in journals and conferences, and has been awarded thirteen patents. He is the author of "Analytical Modeling of Heterogeneous Cellular Networks: Geometry, Coverage, and Capacity," published by Cambridge University Press.

Title: Internal Cognition Engines -- virtual assistants in real cars

Abstract: With modern smartphones, the age of the Star Trek “tricorder,” a handheld device capable of understanding natural language, is already here. Further, the main players in the virtual assistant space are already trying to enable Star Trek’s “speak to the ship” functionality by putting intelligent voice-controlled virtual assistants into the car, thereby turning the car into another mobile platform for their software. In-car navigation systems with (poor) voice recognition have been around for many years, but the rise of CarPlay and Android Auto, among other such systems, has brought powerful new players into this space. It is clear that future autonomous or self-driving vehicles will integrate natural-language recognition even more tightly with the background processes responsible for operating these vehicles. In this talk, we discuss the utility of intelligent voice-controlled virtual assistants in enhancing the driver and passenger experience in a conventional (i.e., human-driven) vehicle. We discuss the ways in which virtual assistants are being integrated into vehicles today, and speculate on how such integration will lead to intelligent autonomous vehicles.

Speaker: MP Divakar

Title: Intro Slides