The Second AAAI Workshop on Reasoning and Learning for Human-Machine Dialogues (DEEP-DIAL 2019)



Partially sponsored by AI Journal

Natural conversation is a hallmark of intelligent systems, and dialog systems have accordingly been a key sub-area of AI for decades. Their most recent form, chatbots, which can engage people in natural conversation and are easy to build in software, have attracted wide attention lately. There are many platforms for quickly creating rule-based dialogs for any domain, and companies are rushing to release chatbots to showcase their AI capabilities and gain market valuation. However, beyond basic demonstrations, there is little experience in designing and using chatbots for real-world applications that require decision making under constraints (e.g., sequential decision making). The workshop is thus timely in helping chatbots realize their full potential.


Furthermore, there is a growing interest in and need for innovation in human-technology interaction, as addressed in the context of Companion Technology. Here, the aim is to implement technical systems that smartly adapt their functionality to their users’ individual needs and requirements and are even able to solve problems in close cooperation with human users. To this end, they need to enter into a dialog and convincingly explain their suggestions and decision-making behavior.


From the research side, statistical and machine learning methods are well entrenched for language understanding and entity detection. However, the wider problem of dialog management remains largely unaddressed, with mainstream tools supporting only rudimentary rule-based processing. There is an urgent need to highlight the crucial role that reasoning methods, such as constraint satisfaction, planning, and scheduling, can play, working together with learning, in building an end-to-end conversation system that evolves over time. From the practical side, conversation systems need to be designed to work with people in a manner such that they can explain their reasoning, convince humans when making choices among alternatives, and stand up to the ethical standards demanded in real-life settings.


Thus, recognizing the need for more research attention, the proposers of the current workshop organized the highly successful DEEP-DIAL18 workshop at AAAI 2018. The event brought together over 100 AI researchers from around the world to discuss a bouquet of research topics around human-machine dialogs. The program included 4 invited talks, 7 reviewed full paper presentations, 4 lightning talks accompanied by posters, and a topical panel discussion.


Topics of Interest Include:


Dialog Systems

  • Design considerations for dialog systems
  • Evaluation of dialog systems, metrics
  • Open domain dialog and chat systems
  • Task-oriented dialogs
  • Style, voice and personality in spoken dialogue and written text
  • Novel methods for NL generation for dialogs
  • Early experiences with implemented dialog systems
  • Mixed-initiative dialogs where a partner is a combination of agent and human
  • Hybrid methods 



Reasoning

  • Domain model acquisition, especially from unstructured text
  • Plan recognition in natural conversation
  • Planning and reasoning in the context of dialog systems
  • Handling uncertainty
  • Optimal dialog strategies


Learning

  • Learning to reason
  • Learning for dialog management
  • End-to-end models for conversation
  • Explaining dialog policy

Practical Considerations

  • Responsible chatting
  • Ethical issues with learning and reasoning in dialog systems
  • Corpora, tools and methodology for dialogue systems
  • Securing one’s chat

The intended audience includes students, academic researchers, and practitioners with an industrial background from the AI sub-areas of dialog systems, learning, reasoning, planning, HCI, ethics, and knowledge representation.

Workshop Agenda
Talk #1
Title: Towards smart chatbots for enhanced health: using multisensory sensing, semantic-cognitive-perceptual computing for monitoring, appraisal, adherence to intervention
Speaker: Prof. Amit Sheth, AAAI and IEEE Fellow, Knoesis, Wright State University, USA
Talk #2
Title: Towards Collaborative Dialogue
Speaker: Dr. Phil Cohen
Professor and Director
Laboratory for Dialogue Research
Faculty of Information Technology
Monash University
Abstract:  This talk will discuss a program of research for building collaborative dialogue systems, which are a core part of virtual assistants. I will briefly discuss the strengths and limitations of current approaches to dialogue,  including neural network-based and slot-filling approaches, but then concentrate on approaches that treat conversation as planned collaborative behaviour.  Collaborative interaction involves recognizing someone’s goals, intentions, and plans, and then performing actions to facilitate them. People have learned this basic capability at a very young age and are expected to be helpful as part of ordinary social interaction. In general, people’s plans involve both speech acts (such as requests, questions, confirmations, etc.) and physical acts. When collaborative behavior is applied to speech acts, people infer the reasons behind their interlocutor’s utterances and attempt to ensure their success. Such reasoning is apparent when an information agent answers the question “Do you know where the Sydney flight leaves?” with “Yes, Gate 8, and it’s running 20 minutes late.” It is also apparent when one asks “where is the nearest gas station?” and the interlocutor answers “2 kilometers to your right” even though it isn’t the closest, but rather the closest one that is open. In this latter case, the respondent has inferred that you want to buy gas, not just to know the location of the station. In both cases, the literal and truthful answer is not cooperative.     In order to build systems that collaborate with humans or other artificial agents, a system needs components for planning, plan recognition, and for reasoning about agents’ mental states (beliefs, desires, goals, intentions, obligations, etc.).   In this talk, I will discuss current theory and practice of such collaborative belief-desire-intention architectures, and demonstrate how they can form the basis for an advanced collaborative dialogue manager. 
In such an approach, systems reason about what they plan to say, and why the user said what s/he did.  Because there is a plan standing behind the system’s utterances, it is able to explain its reasoning. Finally, we will discuss potential methods for incorporating such a plan-based approach with machine-learned approaches.
Speaker bio: Dr. Phil Cohen has long been engaged in the AI subfields of human-computer dialogue, multimodal interaction, and multiagent systems. He is a Fellow of the Association for the Advancement of Artificial Intelligence, and a past President of the Association for Computational Linguistics. Currently, he directs the Laboratory for Dialogue Research at Monash University. Formerly Chief Scientist, AI and Sr. Vice President for Advanced Technology at Voicebox Technologies, he has also held positions at Adapx Inc (founder), the Oregon Graduate Institute (Professor), the Artificial Intelligence Center of SRI International (Sr. Research Scientist and Program Director, Natural Language Program), Fairchild Laboratory for Artificial Intelligence, and Bolt Beranek and Newman. His accomplishments include co-developing influential theories of intention, collaboration, and speech acts, co-developing and deploying high-performance multimodal systems to the US Government, and conceiving and leading the project at SRI International that developed the Open Agent Architecture, which eventually became Siri. Cohen has published more than 150 refereed papers, with more than 16,900 citations, and received 7 patents. His paper with Prof. Hector Levesque, “Intention is Choice with Commitment,” was awarded the inaugural Influential Paper Award from the International Foundation for Autonomous Agents and Multi-Agent Systems. Most recently, he is the recipient of the 2017 Sustained Accomplishment Award from the International Conference on Multimodal Interaction. At Voicebox, Cohen led a team engaged in semantic parsing and human-computer dialogue.

Important Dates

Manuscripts due: November 5, 2018

Notification of acceptance: November 17, 2018

Camera-ready manuscript: November 26, 2018

Workshop: January 27, 2019


Submission Guidelines 

Papers must be formatted in AAAI two-column, camera-ready style, using the AAAI style files.

Regular research papers, which present a significant contribution, may be no longer than 7 pages, where page 7 must contain only references and no other text whatsoever.

Short papers, which describe a position on the topic of the workshop or a demonstration/tool, may be no longer than 4 pages, references included.

Submission Site: ces/?conf=deepdial19




Program Committee

  • Pavan Kapanipathi, IBM TJ Watson Research Center
  • Mitesh Vasa, IBM
  • Matthew Peveler, Rensselaer Polytechnic Institute
  • Q. Vera Liao, IBM
  • Madian Khabsa, Apple
  • Debdoot Mukherjee, Myntra
  • Seyyed Hadi Hashemi, University of Amsterdam
  • Sumant Kulkarni, Zenlabs, Zensar Technologies
  • Julia Kiseleva, Microsoft Research AI
  • Kyle Williams, Microsoft
  • Rahul Jha, University of Michigan
  • Srikanth Tamilselvam, IBM Global Business Services
  • Adi Botea, IBM
  • Walter Lasecki, University of Michigan, Computer Science & Engineering
  • Atriya Sen, Rensselaer Polytechnic Institute


Accepted Papers

A. Full presentations
  • Chinnadhurai Sankar and Sujith Ravi. Conditional Utterance Generation With Discrete Dialog Attributes In Open-Domain Dialog Systems
  • Parag Agrawal, Anshuman Suri and Tulasi Menon. A Trustworthy, Responsible and Interpretable System to Handle Chit-Chat in Conversational Bots
  • Ryo Nakamura, Katsuhito Sudoh, Koichiro Yoshino and Satoshi Nakamura. Another Diversity-Promoting Objective Function for Neural Dialogue Generation
  • Philip Cohen. Back to the Future for Dialogue Research -- A Position Paper
  • Mengting Wan and Xin Chen. Beyond "How may I help you?": Assisting Customer Service Agents with Proactive Responses
  • Libby Ferland, Thomas Huffstutler, Jacob Rice, Joan Zheng, Shi Ni and Maria Gini. Evaluating Older Users' Experiences with Commercial Dialogue Systems: Implications for Future Design and Development
  • Amit Sangroya, Aishwarya Chhabra and C. Anantaram. Learning Latent Beliefs and Performing Epistemic Reasoning for Efficient and Meaningful Dialog Management
B. Lightning Talks and Poster Presentations
  • Hisao Katsumi, Takuya Hiraoka, Koichiro Yoshino, Kazeto Yamamoto, Shota Motoura, Kunihiko Sadamasa and Satoshi Nakamura, Optimization of Information-Seeking Dialogue Strategy for Argumentation-Based Dialogue System
  • Teakgyu Hong, Oh-Woog Kwon and Young-Kil Kim, An End-to-End Trainable Task-oriented Dialog System with Human Feedback
  • Trung Ngo Trong and Kristiina Jokinen, What Should We Talk about? – Models for Topics, Laughter and Body Movements in First Encounters
  • Xiang Kong, Bohan Li, Graham Neubig and Eduard Hovy, An Adversarial Approach to High-Quality, Sentiment-Controlled Neural Dialogue Generation
  • Adi Botea, Christian Muise, Shubham Agarwal, Oznur Alkan, Ondrej Bajgar, Elizabeth Daly, Akihiro Kishimoto, Luis Lastras, Radu Marinescu, Josef Ondrej, Pablo Pedemonte and Miroslav Vodolan, Generating Dialogue Agents via Automated Planning