Lab News

[NLPCC 2022] Training Two-Stage Knowledge-Grounded Dialogues with Attention Feedback

Knowledge-grounded retrieval-based dialogue systems have attracted increasing attention. Among them, two-stage dialogue models, which separate training into knowledge retrieval (via a retriever) and response ranking (via a ranker), have proven powerful. However, these approaches require knowledge-grounded dialogues with hand-annotated knowledge labels. In this paper, we therefore propose training two-stage knowledge-grounded dialogue models with knowledge attention feedback from the ranker to the retriever. In each training iteration, the ranker provides knowledge attention scores as pseudo-supervised feedback for optimizing the retriever. We conduct experiments on two public data sets, and the results demonstrate that our method outperforms the existing baselines.
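The feedback loop described above can be sketched as follows. This is a minimal illustration, assuming the ranker's attention scores are turned into a soft target distribution and the retriever is pushed toward it with a KL-divergence loss; the paper's actual objective, model architectures, and score values may differ, and all names and numbers here are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): how far the retriever's distribution q is
    # from the ranker-derived pseudo labels p.
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# Hypothetical scores over the same 4 knowledge candidates:
# the retriever's relevance scores and the ranker's attention scores.
retriever_scores = np.array([0.2, 1.5, -0.3, 0.8])
ranker_attention = np.array([0.1, 2.0, -0.5, 0.4])

p_teacher = softmax(ranker_attention)   # pseudo-supervised feedback (no human labels)
q_student = softmax(retriever_scores)   # retriever's current distribution

# Minimizing this loss in each iteration would align the retriever
# with the ranker's attention, as in the attention-feedback scheme.
loss = kl_divergence(p_teacher, q_student)
```

The key point this sketch captures is that the retriever is supervised by a distribution derived from the ranker rather than by hand-annotated knowledge labels.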