
The Second AAAI Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL 2019)

Workshop Agenda    |    Important Dates    |    Submission Guidelines    |    Workshop Committee    |    Workshop Schedule    |    Workshop Proceedings    |    Demo contest/student support

Partially sponsored by AI Journal

Natural conversation is a hallmark of intelligent systems. Unsurprisingly, dialog systems have been a key sub-area of AI for decades. Their most recent form, chatbots, which can engage people in natural conversation and are easy to build in software, have attracted considerable public attention. Many platforms allow dialogs to be created quickly for any domain based on simple rules, and companies are rushing to release chatbots to showcase their AI capabilities and boost their market valuation. However, beyond basic demonstrations, there is little experience in how chatbots can be designed and used for real-world applications that require decision making under constraints (e.g., sequential decision making). The workshop will thus be timely in helping chatbots realize their full potential.

Furthermore, there is growing interest in, and need for, innovation in human-technology interaction, as addressed in the context of Companion Technology. Here, the aim is to implement technical systems that smartly adapt their functionality to their users' individual needs and requirements and are even able to solve problems in close cooperation with human users. To this end, they need to enter into a dialog and convincingly explain their suggestions and decision-making behavior.

From the research side, statistical and machine learning methods are well entrenched for language understanding and entity detection. However, the wider problem of dialog management remains largely unaddressed, with mainstream tools supporting only rudimentary rule-based processing. There is an urgent need to highlight the crucial role that reasoning methods, such as constraint satisfaction and planning and scheduling, can play together with learning in building end-to-end conversation systems that evolve over time. From the practical side, conversation systems need to be designed to work with people in a manner that lets them explain their reasoning, convince humans when choosing among alternatives, and stand up to the ethical standards demanded in real-life settings.

Thus, recognizing the need for more research attention, the proposers of the current workshop organized the highly successful DEEP-DIAL18 workshop at AAAI 2018. The event brought together over 100 AI researchers from around the world to discuss a broad range of research topics around human-machine dialogs. The program included 4 invited talks, 7 reviewed full paper presentations, 4 lightning talks accompanied by posters, and a topical panel discussion.

Topics of Interest Include:

Dialog Systems

  • Design considerations for dialog systems
  • Evaluation of dialog systems, metrics
  • Open domain dialog and chat systems
  • Task-oriented dialogs
  • Style, voice and personality in spoken dialogue and written text
  • Novel methods for natural language generation for dialogs
  • Early experiences with implemented dialog systems
  • Mixed-initiative dialogs where a partner is a combination of agent and human
  • Hybrid methods 

Reasoning 

  • Domain model acquisition, especially from unstructured text
  • Plan recognition in natural conversation
  • Planning and reasoning in the context of dialog systems 
  • Handling uncertainty
  • Optimal dialog strategies

Learning

  • Learning to reason 
  • Learning for dialog management
  • End-to-end models for conversation
  • Explaining dialog policy 

Practical Considerations

  • Responsible chatting
  • Ethical issues with learning and reasoning in dialog systems
  • Corpora, tools, and methodology for dialogue systems
  • Securing one’s chat

The intended audience includes students, academic researchers, and practitioners with industrial backgrounds, drawn from the AI sub-areas of dialog systems, learning, reasoning, planning, HCI, ethics, and knowledge representation.

Workshop Agenda
Talks
 
Talk #1
Title: Towards smart chatbots for enhanced health: using multisensory sensing, semantic-cognitive-perceptual computing for monitoring, appraisal, adherence to intervention
 
Speaker: Prof. Amit Sheth, AAAI and IEEE Fellow, Knoesis, Wright State University, USA
 
Talk #2
Title: Towards Collaborative Dialogue
Speaker: Dr. Phil Cohen
Professor and Director
Laboratory for Dialogue Research
Faculty of Information Technology
Monash University
 
Abstract: This talk will discuss a program of research for building collaborative dialogue systems, which are a core part of virtual assistants. I will briefly discuss the strengths and limitations of current approaches to dialogue, including neural network-based and slot-filling approaches, but then concentrate on approaches that treat conversation as planned collaborative behaviour. Collaborative interaction involves recognizing someone's goals, intentions, and plans, and then performing actions to facilitate them. People learn this basic capability at a very young age and are expected to be helpful as part of ordinary social interaction. In general, people's plans involve both speech acts (such as requests, questions, confirmations, etc.) and physical acts. When collaborative behavior is applied to speech acts, people infer the reasons behind their interlocutor's utterances and attempt to ensure their success.

Such reasoning is apparent when an information agent answers the question "Do you know where the Sydney flight leaves?" with "Yes, Gate 8, and it's running 20 minutes late." It is also apparent when one asks "Where is the nearest gas station?" and the interlocutor answers "2 kilometers to your right," even though that station isn't the closest, but rather the closest one that is open. In the latter case, the respondent has inferred that you want to buy gas, not just to know the location of the station. In both cases, the literal and truthful answer would not be cooperative.

In order to build systems that collaborate with humans or other artificial agents, a system needs components for planning, plan recognition, and reasoning about agents' mental states (beliefs, desires, goals, intentions, obligations, etc.). In this talk, I will discuss current theory and practice of such collaborative belief-desire-intention architectures and demonstrate how they can form the basis for an advanced collaborative dialogue manager. In such an approach, systems reason about what they plan to say and why the user said what s/he did. Because there is a plan standing behind the system's utterances, the system is able to explain its reasoning. Finally, we will discuss potential methods for combining such a plan-based approach with machine-learned approaches.
 
Speaker bio: Dr. Phil Cohen has long been engaged in the AI subfields of human-computer dialogue, multimodal interaction, and multiagent systems. He is a Fellow of the Association for the Advancement of Artificial Intelligence and a past President of the Association for Computational Linguistics. Currently, he directs the Laboratory for Dialogue Research at Monash University. Formerly Chief Scientist, AI and Sr. Vice President for Advanced Technology at Voicebox Technologies, he has also held positions at Adapx Inc (founder), the Oregon Graduate Institute (Professor), the Artificial Intelligence Center of SRI International (Sr. Research Scientist and Program Director, Natural Language Program), Fairchild Laboratory for Artificial Intelligence, and Bolt Beranek and Newman. His accomplishments include co-developing influential theories of intention, collaboration, and speech acts; co-developing and deploying high-performance multimodal systems to the US Government; and conceiving and leading the project at SRI International that developed the Open Agent Architecture, which eventually became Siri. Cohen has published more than 150 refereed papers, with more than 16,900 citations, and has received 7 patents. His paper with Prof. Hector Levesque, "Intention is Choice with Commitment," was awarded the inaugural Influential Paper Award from the International Foundation for Autonomous Agents and Multi-Agent Systems. Most recently, he received the 2017 Sustained Accomplishment Award from the International Conference on Multimodal Interaction. At Voicebox, Cohen led a team engaged in semantic parsing and human-computer dialogue.

Important Dates

Manuscripts due: November 5, 2018

Notification of acceptance: November 17, 2018

Camera-ready manuscript: November 26, 2018

Workshop: January 27, 2019

Submission Guidelines 

Papers must be formatted in AAAI two-column, camera-ready style (AAAI style files are at:  http://www.aaai.org/Publications/Templates/AuthorKit18.zip).

Regular research papers, which present a significant contribution, may be no longer than 7 pages, where page 7 must contain only references, and no other text whatsoever.

Short papers, which describe a position on the topic of the workshop or a demonstration/tool, may be no longer than 4 pages, references included.

Submission Site: https://easychair.org/conferences/?conf=deepdial19

Organizers

Program Committee

  • Pavan Kapanipathi, IBM TJ Watson Research Center
  • Mitesh Vasa, IBM
  • Matthew Peveler, Rensselaer Polytechnic Institute
  • Q. Vera Liao, IBM
  • Madian Khabsa, Apple
  • Debdoot Mukherjee, Myntra
  • Seyyed Hadi Hashemi, University of Amsterdam
  • Sumant Kulkarni, Zenlabs, Zensar Technologies
  • Julia Kiseleva, Microsoft Research AI
  • Kyle Williams, Microsoft
  • Rahul Jha, University of Michigan
  • Srikanth Tamilselvam, IBM Global Business Services
  • Adi Botea, IBM
  • Walter Lasecki, University of Michigan, Computer Science and Engineering
  • Atriya Sen, Rensselaer Polytechnic Institute

Accepted Papers

A. Full presentations
  • Chinnadhurai Sankar and Sujith Ravi. Conditional Utterance Generation With Discrete Dialog Attributes In Open-Domain Dialog Systems
  • Parag Agrawal, Anshuman Suri and Tulasi Menon. A Trustworthy, Responsible and Interpretable System to Handle Chit-Chat in Conversational Bots http://arxiv.org/abs/1811.07600
  • Ryo Nakamura, Katsuhito Sudoh, Koichiro Yoshino and Satoshi Nakamura. Another Diversity-Promoting Objective Function for Neural Dialogue Generation, https://arxiv.org/abs/1811.08100v1
  • Philip Cohen. Back to the Future for Dialogue Research --  A Position Paper. 
  • Mengting Wan and Xin Chen. Beyond "How may I help you?": Assisting Customer Service Agents with Proactive Responses, http://arxiv.org/abs/1811.10686
  • Libby Ferland, Thomas Huffstutler, Jacob Rice, Joan Zheng, Shi Ni and Maria Gini. Evaluating Older Users' Experiences with Commercial Dialogue Systems: Implications for Future Design and Development
  • Amit Sangroya, Aishwarya Chhabra and C. Anantaram. Learning Latent Beliefs and Performing Epistemic Reasoning for Efficient and Meaningful Dialog Management, http://arxiv.org/abs/1811.10238
B. Lightning Talks and Poster Presentations
  • Hisao Katsumi, Takuya Hiraoka, Koichiro Yoshino, Kazeto Yamamoto, Shota Motoura, Kunihiko Sadamasa and Satoshi Nakamura, Optimization of Information-Seeking Dialogue Strategy for Argumentation-Based Dialogue System, http://arxiv.org/abs/1811.10728
  • Teakgyu Hong, Oh-Woog Kwon and Young-Kil Kim, An End-to-End Trainable Task-oriented Dialog System with Human Feedback
  • Trung Ngo Trong and Kristiina Jokinen, What Should We Talk about? – Models for Topics, Laughter and Body Movements in First Encounters
  • Xiang Kong, Bohan Li, Graham Neubig and Eduard Hovy, An Adversarial Approach to High-Quality, Sentiment-Controlled Neural Dialogue Generation
  • Adi Botea, Christian Muise, Shubham Agarwal, Oznur Alkan, Ondrej Bajgar, Elizabeth Daly, Akihiro Kishimoto, Luis Lastras, Radu Marinescu, Josef Ondrej, Pablo Pedemonte and Miroslav Vodolan, Generating Dialogue Agents via Automated Planning

Demo contest/student support

Call for Applications for Support to Attend the Second AAAI Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL 2019)

We invite applications from students and early researchers for support to attend the workshop. Using sponsorship from AI Journal, we will offer up to two awards and two travel support grants. 
 
There are two ways to apply:
  • [Preferred] By demonstrating a TAsk-oriented Data-driven conversation bot/agent (called a TADBot, or just a chatbot) that works with open data. A 1-page description needs to be submitted; see further details about the demonstration below. Or,
  • By submitting a 1-page description of your research interests relevant to conversation systems in general and the demonstration setting in particular.
Deadline: Jan 15, 2019
 
Demonstration Details: Task Oriented Chatbot using Open Data
 
Motivation
There is a long tradition of giving momentum to new ideas by appealing to a community's competitive spirit. We seek to promote research and best practices in dialog by encouraging the building and sharing of chatbots and by facilitating the participation of young researchers.
 
Task-oriented Chatbots
Task-oriented conversation agents, which help a user look for information or complete a task, represent a large segment of the chatbots deployed and used in practice. However, they have been largely ignored by mainstream competitions in the field [1, 2]. We will focus on chatbots that help a person find information recorded in a repository, potentially one encoding a tree-like hierarchy. Examples of such information are: a university's catalog of courses, a hospital's directory of staff, a product catalog [4], a transportation agency's route network [5], and a customer-support FAQ. To make the data source accessible, we will focus on open data [3], i.e., data made available for reuse.
 
The user may be an average adult, an elderly person, a child, someone unfamiliar with computers, or someone with a disability. The user may use a single language or a mixture of languages, may not know exact spellings, and may change their intent mid-dialog. The aim of the chatbot is to retrieve the information the user seeks unambiguously and efficiently. Since the ground-truth answer to the user's request is contained in the repository, dialogs can be evaluated for efficiency (the ability to answer correctly and quickly) as well as effectiveness (the cognitive satisfaction of the user).
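To make these two criteria concrete, here is a minimal Python sketch of one way efficiency and effectiveness could be scored from logged dialogs. The log format and field names (turns, answered_correctly, user_rating) are illustrative assumptions, not part of the contest specification.

from statistics import mean

# Score a batch of logged dialogs against the two criteria above.
# Each dialog is assumed (for illustration) to be a dict such as:
#   {"turns": 4, "answered_correctly": True, "user_rating": 0.8}
def score_dialogs(dialogs):
    return {
        # Efficiency: answer correctly, in few turns
        "success_rate": mean(1.0 if d["answered_correctly"] else 0.0 for d in dialogs),
        "avg_turns": mean(d["turns"] for d in dialogs),
        # Effectiveness: the user's (self-reported) satisfaction
        "avg_satisfaction": mean(d["user_rating"] for d in dialogs),
    }

logs = [
    {"turns": 3, "answered_correctly": True, "user_rating": 0.9},
    {"turns": 7, "answered_correctly": False, "user_rating": 0.4},
]
print(score_dialogs(logs))  # prints the three aggregate scores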
 
Preferred / Example Scenario:
Consider a chatbot that helps a user find information about subway stations. We will consider the case of the New York City subway, where information about stations, their facilities, and the train routes serving them is publicly available [5]. A minimal sketch of such a bot appears below.
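As a rough illustration only, the following single-turn Python sketch shows the core lookup step of such a station bot. The file name stations.csv, its columns (name, lines, ada_accessible), and the fuzzy-matching strategy are assumptions for exposition; a real entry would follow the actual schema of the open dataset [5] and add intent detection, multi-turn clarification, and richer error handling.

import csv
import difflib

# Load the station repository. Column names are assumed for illustration.
def load_stations(path="stations.csv"):
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# Answer a single station query, tolerating misspellings via fuzzy matching
# (e.g., "times sqare" can still resolve to "Times Sq-42 St").
def answer(query, stations):
    names = [s["name"] for s in stations]
    match = difflib.get_close_matches(query, names, n=1, cutoff=0.6)
    if not match:
        return "Sorry, I couldn't find that station. Could you rephrase?"
    s = next(st for st in stations if st["name"] == match[0])
    return f"{s['name']}: lines {s['lines']}, ADA accessible: {s['ada_accessible']}"

if __name__ == "__main__":
    stations = load_stations()
    query = input("Which station? ")
    print(answer(query, stations))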
 
Alternative Scenario:
Participants may alternatively submit any chatbot which uses any open dataset. 
 
Related Research
There is an active area of research on question answering (QA) directly from online sources and knowledge bases using learning and reasoning methods [7, 8, 9]. However, such systems model the data but not the users or the iterative, multi-turn nature of their interrogation. This initiative is a step toward making open data more accessible to users by automating the generation of conversational interfaces [10].
 
Review Criteria
  • Ability of chatbot to answer queries related to subject matter (e.g., train stations in preferred scenario)
  • Ability of chatbot to handle users of different backgrounds leading to dialogs of different lengths (e.g., exact terms, partial matches, switching intents)
  • Ability of chatbot to handle multiple turns
  • Ability to handle abusive and discriminatory language
  • Response time and error handling
  • Any special features, e.g., the ability to handle a mixture of languages, or showing multi-modal responses such as maps or graphs when appropriate
Submission: A team may submit a 1-page document with the following information:
  • Team name and members. Identify student members and indicate whether support is needed for a student to attend the DEEP-DIAL 19 workshop
  • Information about the open dataset
  • A demonstration video of the chatbot in use
  • Link to the source code on GitHub
  • URL of the actual chatbot, so that it can be tested
  • Details of the implemented approach. A link to a paper is allowed.
Rules
  • Source code should be made available on GitHub.
  • The chatbot should be publicly available for demonstration for at least 3 months, for example, hosted on a cloud platform.
  • Data used by the chatbot should be open and hence downloadable for free.
Prizes and support
  • Prize-based support: 1st prize - $700, 2nd prize - $500
  • Student support (up to two) - $500
References