Automated Scoring of Chatbot Responses in Conversational Dialogue
2019, Lecture Notes in Electrical Engineering
Rapid advancement in natural language processing (NLP) and machine learning has led to the recent development of many chatbot systems using various algorithms. However, in a conversational dialogue setting, building a system that communicates with humans in a meaningful and coherent manner remains a challenging task. Furthermore, it is difficult even for humans to evaluate the responses of a chatbot system given the context of the conversation. In this paper, we focus on the problem of automatically evaluating and scoring the quality of chatbot responses in human-chatbot dialogue settings. We propose a novel approach that combines the word representations of human and chatbot responses and uses machine learning algorithms, such as support vector machines (SVM), random forests (RF), and neural networks (NN), to learn the quality of the chatbot responses. Our experimental results show that our proposed approach performs well.

Recent advancement in NLP has greatly improved the quality of chatbot systems for conversational dialogues. As a result, many chatbot systems have been developed recently [3, 31, 11]. Despite the recent advances in dialogue systems, creating a system to communicate with humans in a natural, coherent, and meaningful manner
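The scoring pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the sentence embeddings and quality labels below are synthetic placeholders, and concatenation is assumed as one simple way to combine the human and chatbot representations before training the SVM and random-forest classifiers.

```python
# Hedged sketch: score chatbot responses by combining vector
# representations of the human utterance and the chatbot reply,
# then training standard classifiers on human quality labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pairs, dim = 200, 50

# Placeholder sentence embeddings (e.g., averaged word vectors);
# in practice these would come from a pretrained embedding model.
human_vecs = rng.normal(size=(n_pairs, dim))
bot_vecs = rng.normal(size=(n_pairs, dim))

# Combine the two representations by concatenation (one simple choice).
X = np.hstack([human_vecs, bot_vecs])
y = rng.integers(0, 2, size=n_pairs)  # 0 = poor reply, 1 = good reply

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (SVC(kernel="rbf"), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test accuracy:", model.score(X_te, y_te))
```

With random placeholder data the accuracies hover around chance; the point is only the shape of the pipeline: embed both sides of the exchange, combine the vectors, and fit an off-the-shelf classifier to predict response quality.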