Research Projects

Our research focuses on spoken dialog systems and conversational interfaces that make human-computer interaction more natural and efficient. We currently have several ongoing research projects:

DialPort: Spoken Dialog Research with Real Data

NSF-funded DialPort has three thrusts: Tools, the Portal, and the REAL Challenge. The Portal gives your connected dialog system a stream of user data, and you can also request data that other systems have gathered. A set of tools is available to help you and your students build applications. Finally, the REAL Challenge is an effort to get younger students interested in what we do.

Portal: A Multi-Domain Dialog Platform that Groups Diverse Dialog Agents

Portal aims to connect systems produced by the spoken dialog academic research community and give them access to real users. We have built a distributed, multi-modal, multi-agent prototype dialog framework that affords easy integration with a range of remote resources, from end-to-end dialog systems to external knowledge APIs. To date, the DialPort portal hosts one chatbot and has successfully connected to more than 10 different dialog agents, including both task-oriented and chit-chat systems.
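To give a concrete sense of what a connected agent can look like, here is a minimal sketch of a remote dialog agent exposed as an HTTP service. The endpoint name (/next_turn) and the JSON fields are hypothetical placeholders, not the actual DialPort protocol; the connection tutorial below documents the real interface.

```python
# Illustrative only: a remote dialog agent exposed over HTTP.
# The route and JSON schema are hypothetical, not DialPort's actual API.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/next_turn", methods=["POST"])
def next_turn():
    turn = request.get_json()
    user_utterance = turn.get("utterance", "")
    session_id = turn.get("session_id")
    # A real agent would run NLU, dialog state tracking, and a policy
    # here; this stub simply echoes the user's input back.
    reply = f"You said: {user_utterance}"
    return jsonify({"session_id": session_id, "response": reply})

if __name__ == "__main__":
    app.run(port=5000)
```

Because the portal only needs a turn-in, turn-out contract like this one, agents with very different internals (rule-based, statistical, or end-to-end neural) can all plug in behind the same interface.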

Anyone who has or is developing a dialog system can become part of the Portal community. To learn how to connect, read our tutorial.

Read more about it and talk to our agent Skylar here.

The REAL Challenge: New Ideas for Future Dialog Systems

The REAL Challenge was first held in 2014. Its long-term goal is to create streams of real data that the research community can use, by fostering the creation of systems capable of attracting real users.

A video of a past presentation can be found here.

Multi-modal Distraction Detection

Distracted driving continues to cause traffic accidents despite prevailing legislation. The goal of this project is to automatically determine when the driver is becoming distracted; this information can be sent to an in-car warning system or used to shut down the distracting activity. While we can already detect distraction fairly reliably from signals such as speech, gas pedal and brake use, and steering wheel trajectory, automatic detection of head movement will make the distraction detector more robust: drivers often turn their heads to talk to a passenger or, more frequently, to look at a smart device.

In this project, we will gather data in a driving simulator from 50 subjects, who will first drive the course and then watch a recording of their driving and tell us where they were distracted. They will use their own smart devices to listen to and dictate email. The conditions will vary in the cognitive load of the email, the difficulty of the road, and the need to look at the smart device. The resulting database will be used to train and test a new, robust distraction detector.
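As a rough illustration of how signals like these can be fused, the sketch below trains a classifier on per-window features. The feature set, synthetic data, and model choice are assumptions made for illustration; the project's actual detector will be trained on the simulator recordings described above.

```python
# Illustrative sketch of multi-modal distraction detection by feature
# fusion. All data here are synthetic; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 500  # fixed-length time windows of driving data

# One feature vector per window, mixing driving and head-pose signals.
X = np.column_stack([
    rng.normal(size=n_windows),          # steering-angle variance
    rng.normal(size=n_windows),          # gas/brake pedal activity
    rng.integers(0, 2, size=n_windows),  # speech detected in window
    rng.normal(size=n_windows),          # mean head yaw
    rng.normal(size=n_windows),          # time spent looking away
])
y = rng.integers(0, 2, size=n_windows)   # 1 = distracted, 0 = attentive

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With real labels from the subjects' own annotations of their driving, the same pipeline would let us measure how much the head-movement features improve over the driving signals alone.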

Let’s Go!: A Spoken Dialog System for the General Public

Let’s Go! is building a spoken dialog system that can be used by the general public. While there has been success in building spoken dialog systems that interact well with their users, such systems often work for only a limited group of people. The system we are developing for Let’s Go! is designed to work with a much wider population, including groups that typically have trouble interacting with dialog systems, such as non-native English speakers and the elderly. Let’s Go! operates in the domain of bus information for Pittsburgh’s Port Authority Transit bus system, providing a telephone-based interface for accessing bus schedules and route information.
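To illustrate the kind of task the system handles, here is a minimal slot-filling sketch for a bus-information query. The slot names and prompts are hypothetical and do not reflect Let’s Go!'s actual dialog manager.

```python
# Illustrative only: a minimal slot-filling frame for a bus query.
# Slot names and prompts are hypothetical, not Let's Go!'s real design.
REQUIRED_SLOTS = ["departure_stop", "arrival_stop", "travel_time"]

PROMPTS = {
    "departure_stop": "Where are you leaving from?",
    "arrival_stop": "Where are you going?",
    "travel_time": "When would you like to travel?",
}

def next_system_act(frame: dict) -> str:
    """Ask for the first missing slot; act once all slots are filled."""
    for slot in REQUIRED_SLOTS:
        if not frame.get(slot):
            return PROMPTS[slot]
    return (f"Looking up buses from {frame['departure_stop']} "
            f"to {frame['arrival_stop']} at {frame['travel_time']}.")

# A partially filled frame triggers a follow-up question.
print(next_system_act({"departure_stop": "CMU"}))  # -> "Where are you going?"
```

For the wide population Let’s Go! targets, the hard part is not this control loop but recognizing and understanding varied, often accented or disfluent speech well enough to fill the slots reliably.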

Read more about it here.

Let’s Go! has been integrated into DialPort; talk to it here!