After registering for the conference (see registration info below), you can find the Zoom link on the workshops page.
The CASK workshop will be held virtually on October 7th with the following schedule:
All times are in Pacific Time (UTC-7).
To suggest discussion topics, please send an email to email@example.com with the subject line “[CASK DISCUSS]: topic of interest”.
Conversational agents are increasingly becoming part of our day-to-day life on the phone, in the car, and in the living room, performing tasks on our behalf that often require interfacing with various forms of structured knowledge. The goal of this workshop is to shed light on, and encourage interdisciplinary research into, some of the open problems situated at the interface of dialog systems and structured knowledge (e.g., knowledge bases, tables, etc.), which is critical to the development of digital assistants like Google Home, Alexa, and Siri. Our hope is to help bridge the gap between academic and industry research in this area. Open questions include, but are not limited to:
Make sure to also check out the closely-related AKBC workshop on personal knowledge graphs here.
In order to participate in the workshop, we request that you register at the [AKBC 2021 registration page](https://akbc.ws/2021/registration/). There is no separate fee for workshop registration; the conference registration ($50 for non-students, $30 for students) includes all workshops.
Invited speakers: you should receive a free registration code by Sep 14. If you don't, please email the AKBC workshop co-chairs: Eunsol Choi (firstname.lastname@example.org) and Waleed Ammar (email@example.com).
We invite the submission of extended abstracts to CASK describing (i) new unpublished research or (ii) relevant recently published research in the field. The workshop will not have archival proceedings; accepted abstracts will be presented as lightning talks during the workshop and listed on the website.
To submit an abstract to CASK, please send an email to firstname.lastname@example.org with the subject line “[CASK ABSTRACT]: here-goes-your-title”. The email should include the following:
Submitted abstracts will undergo a lightweight review process to make sure they are relevant to the workshop, but no official reviews will be sent to authors. All accepted abstracts will be made available prior to the workshop. We will also make the lightning talks accessible after the workshop.
If you require further assistance in participating in the workshop, please let us know and we’ll be in touch to discuss how we can best address your needs.
All deadlines are 11:59 pm UTC-12 (“anywhere on Earth”).
(1) Using Machine Translation to Localize Task Oriented NLG Output, by Scott Roy, Waymo
Abstract: One of the challenges in a task oriented natural language application like the Google Assistant, Siri, or Alexa is to localize the output to many languages. This talk explores applying machine translation to the English output. Using machine translation is very scalable, as it can work with any English output and can handle dynamic text, but otherwise the problem is a poor fit: the required quality bar is close to perfection, the range of sentences is extremely narrow, and the sentences are often very different from the ones in the machine translation training data. This combination of requirements makes the problem very different from other domain adaptation problems in machine translation. We are able to reach the required quality bar by building on existing ideas and adding new ones: fine-tuning on in-domain translations, adding sentences from the Web, adding semantic annotations, using automatic error detection, and taking advantage of structured data features in addition to the English text. The talk presents the approach and results, together with a distilled data-to-text model to serve the translation models at scale.
Bio: Scott Roy is a senior AI researcher and engineering lead at Waymo, where he works on lidar perception. In his previous role at Google he worked extensively on deep learning for natural language understanding and generation, looking at problems in machine translation, data-to-text generation, and open domain chatbots. Prior to that he led the CORE project at Yahoo to optimize and personalize the content on the Yahoo home page. He has a long career at both startups and big companies developing applied ML systems.
(2) Duality Exploitation for Language Understanding and Generation, by Yun-Nung Vivian Chen, National Taiwan University
Abstract: Natural language understanding (NLU) and natural language generation (NLG) hold a strong dual relationship: NLU aims at predicting semantic labels based on natural language utterances, while NLG does the opposite. This talk describes how we can better exploit this duality to improve the performance of both models, in both the training and inference stages, demonstrating the great potential of scalable systems for practical usage.
Bio: Yun-Nung (Vivian) Chen is currently an associate professor in the Department of Computer Science & Information Engineering at National Taiwan University. She earned her Ph.D. degree from Carnegie Mellon University. Her research interests focus on spoken dialogue systems, language understanding, natural language processing, and multimodality. She has received Google Faculty Research Awards, Amazon AWS Machine Learning Research Awards, the MOST Young Scholar Fellowship, and the FAOS Young Scholar Innovation Award. Prior to joining National Taiwan University, she worked in the Deep Learning Technology Center at Microsoft Research Redmond. http://vivianchen.idv.tw/
(3) Grounding Dialogue System Responses on Unstructured Knowledge, by Dilek Hakkani-Tür, Amazon
Abstract: Incorporating unstructured knowledge from text resources into conversations is a challenging area for task-oriented and open-domain dialogue systems. Our work in this area investigates core issues for these two types of dialogue systems. For task-oriented conversations, to provide users with the appropriate responses, our approach has been detecting knowledge-seeking user turns, retrieving and selecting relevant knowledge, and then grounding response generation on the selected knowledge. Most of the previous work in this area has focused on written conversations due to the available datasets. However, existing state-of-the-art models trained on written data do not perform well on spoken conversations. Our recent work presents a new benchmark on spoken task-oriented conversations and shows improvements when leveraging n-best speech recognition hypotheses. Selecting the knowledge to ground the responses on is also core to open-domain dialogue systems. The existing literature on open-domain knowledge selection is limited and makes certain brittle assumptions about knowledge sources to simplify the overall task, such as the existence of a single relevant knowledge sentence per context. We evaluate the existing state of open-domain conversation knowledge selection and improve on it by proposing a new framework for collecting relevant knowledge and benchmarking knowledge selection algorithms. We show that models that use the new approach outperform the models trained on standard datasets, using both intrinsic evaluation and extrinsic measures of response quality.
Bio: Dilek Hakkani-Tür is a senior principal scientist at Amazon Alexa AI focusing on enabling natural dialogues with machines, and a Visiting Distinguished Professor at UC Santa Cruz. Prior to joining Amazon, she led the dialogue research group at Google (2016-2018) and was a principal researcher at Microsoft Research (2010-2016), the International Computer Science Institute (ICSI, 2006-2010), and AT&T Labs-Research (2001-2005). She received her BSc degree from Middle East Technical University in 1994, and MSc and PhD degrees from the Department of Computer Engineering at Bilkent University in 1996 and 2000, respectively. Her research interests include conversational AI, natural language and speech processing, spoken dialogue systems, and machine learning for language processing. She holds over 80 granted patents and has co-authored more than 300 papers in natural language and speech processing. She has received several best paper awards from the IEEE Signal Processing Society, ISCA, and EURASIP for publications she co-authored on conversational systems, including her earlier work on active learning for dialogue systems. She served as an associate editor for IEEE Transactions on Audio, Speech and Language Processing (2005-2008), a member of the IEEE Speech and Language Technical Committee (2009-2014), area editor for speech and language processing for Elsevier's Digital Signal Processing Journal and IEEE Signal Processing Letters (2011-2013), and on the ISCA Advisory Council (2015-2019). She is currently the Editor-in-Chief of the IEEE/ACM Transactions on Audio, Speech and Language Processing, an IEEE Distinguished Industry Speaker (2021), and a fellow of the IEEE (2014) and ISCA (2014).
Jana Thompson (speaker), Isabella Su, Ali Hosseini
Nouha Dziri (speaker), Andrea Madotto, Osmar Zaiane, Avishek Joey Bose
Yan Xu (speaker), Etsuko Ishii, Samuel Cahyawijaya, Zihan Liu, Genta Indra Winata, Andrea Madotto, Dan Su, Pascale Fung
Sai Muralidhar Jayanthi (speaker), Varsha Embar, Karthik Raghunathan
Wolfgang Maier (speaker), Stefan Ultes
Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang (speaker), Wojciech Kryściński, Nick Schoelkopf, Riley Kong, Xiangru Tang, Murori Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, Dragomir Radev