MS Project Defense - Liangqun Lu

Deep Learning for Dialogue Systems

Liangqun Lu, MS Candidate

Wednesday, Apr. 10, 2019, 2:00 pm
Dunn Hall 375 Conference Room

Committee Members:

Prof. Vasile Rus, Chair
Prof. Bernie J. Daigle, Jr.
Prof. Deepak Venugopal

Abstract

Widely applicable interactive conversational agents require the development of intelligent dialogue systems. Natural language generation is critical to dialogue response generation, and Recurrent Neural Networks (RNNs), including long short-term memory (LSTM) networks, have been applied to this task. End-to-end sequence-to-sequence (seq2seq) models, in which an encoder encodes the input and a decoder generates output from that encoding and a language model, have demonstrated effectiveness in dialogue generation. Reinforcement learning implemented in seq2seq models rewards conversations for informativity, coherence, and ease of answering. Generative Adversarial Networks (GANs), which use a discriminative model to guide the training of a generative model, have enjoyed considerable success in generating real-valued data. In this project, we built an LSTM-based seq2seq model for dialogue generation using pre-trained word embeddings. We applied the model to two public datasets, movie dialogues and Reddit utterances, and evaluated its performance with the BiLingual Evaluation Understudy (BLEU) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metrics. We also deployed the model in the Python Django web framework to provide online, interactive, data-driven dialogue generation.
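
To illustrate the encoder-decoder architecture described in the abstract, the sketch below builds a minimal LSTM-based seq2seq model in Keras. It is not the project's actual implementation: the vocabulary size, embedding dimension, and hidden-state size are illustrative assumptions, and in practice the embedding layers would be initialized from pre-trained word vectors rather than learned from scratch.

    # Minimal sketch of an LSTM-based encoder-decoder (seq2seq) model.
    # Sizes below are assumptions for illustration only.
    from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
    from tensorflow.keras.models import Model

    VOCAB_SIZE = 10000   # assumed vocabulary size
    EMBED_DIM = 300      # assumed dimension of pre-trained word embeddings
    HIDDEN_DIM = 256     # assumed LSTM hidden-state size

    # Encoder: embeds the input utterance and keeps only its final LSTM states.
    enc_inputs = Input(shape=(None,), name="encoder_tokens")
    enc_embed = Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(enc_inputs)
    _, state_h, state_c = LSTM(HIDDEN_DIM, return_state=True)(enc_embed)

    # Decoder: generates the response token by token, conditioned on the
    # encoder's final states.
    dec_inputs = Input(shape=(None,), name="decoder_tokens")
    dec_embed = Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(dec_inputs)
    dec_outputs, _, _ = LSTM(HIDDEN_DIM, return_sequences=True,
                             return_state=True)(dec_embed,
                                                initial_state=[state_h, state_c])
    dec_probs = Dense(VOCAB_SIZE, activation="softmax")(dec_outputs)

    model = Model([enc_inputs, dec_inputs], dec_probs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()

During training, the decoder receives the target response shifted by one token (teacher forcing); at inference time, responses are generated one token at a time by feeding each predicted token back into the decoder.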