
Exploring Recurrent, Memory And Attention Based Architectures For Scoring Interactional Aspects Of Human-Machine Text Dialog

Vikram Ramanarayanan, Matthew Mulholland, Debanjan Ghosh. arXiv 2020

[Paper]    
Tags: Attention Mechanism, Merging, Model Architecture, Pretraining Methods, Reinforcement Learning, Transformer

An important step towards enabling English language learners to improve their conversational speaking proficiency is automated scoring of multiple aspects of interactional competence, followed by targeted feedback. This paper builds on previous work in this direction to investigate multiple neural architectures (recurrent, attention-based, and memory-based) alongside feature-engineered models for the automated scoring of interactional and topic-development aspects of text dialog data. We conducted experiments on a conversational database of text dialogs from human learners interacting with a cloud-based dialog system; each dialog was triple-scored along multiple dimensions of conversational proficiency. We find that a fusion of multiple architectures performs competently on our automated scoring task relative to expert inter-rater agreements, with (i) hand-engineered features passed to a support vector learner and (ii) transformer-based architectures contributing most prominently to the fusion.
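The fusion the abstract describes (late combination of a support vector learner over hand-engineered features with a transformer-based scorer) can be illustrated with a short sketch. The Python code below is a minimal, hypothetical rendering of that pattern, not the authors' implementation: the toy data, feature dimensionality, model sizes, equal fusion weights, and the 1-4 score scale are all assumptions made for illustration. Quadratic weighted kappa, a standard agreement metric in automated scoring, is used to compare fused predictions against reference scores.

```python
# A minimal sketch of prediction-level (late) fusion for dialog scoring.
# All feature names, hyperparameters, and the fusion weight are illustrative
# assumptions, not the paper's exact configuration.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Toy data: 200 dialogs, 12 hand-engineered features each, plus token ids
# for a transformer scorer; scores are on a hypothetical 1-4 holistic scale.
n, n_feats, seq_len, vocab = 200, 12, 32, 1000
X_feats = rng.normal(size=(n, n_feats))
X_tokens = rng.integers(0, vocab, size=(n, seq_len))
y = rng.integers(1, 5, size=n).astype(float)

# (i) Hand-engineered features passed to a support vector learner.
svr = SVR(kernel="rbf", C=1.0).fit(X_feats, y)
svr_pred = svr.predict(X_feats)

# (ii) A tiny transformer encoder that mean-pools token embeddings
# and regresses a single proficiency score.
class TransformerScorer(nn.Module):
    def __init__(self, vocab, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens):
        h = self.enc(self.emb(tokens))                # (batch, seq, d_model)
        return self.head(h.mean(dim=1)).squeeze(-1)   # pooled -> one score

model = TransformerScorer(vocab)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens_t = torch.tensor(X_tokens, dtype=torch.long)
y_t = torch.tensor(y, dtype=torch.float32)
for _ in range(30):                                   # brief toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(tokens_t), y_t)
    loss.backward()
    opt.step()
tr_pred = model(tokens_t).detach().numpy()

# Late fusion: average the two systems' predictions (equal weights assumed).
fused = 0.5 * svr_pred + 0.5 * tr_pred

# Quadratic weighted kappa between fused scores (rounded to the 1-4 scale)
# and the toy reference scores.
kappa = cohen_kappa_score(
    y.astype(int),
    np.clip(np.rint(fused), 1, 4).astype(int),
    weights="quadratic",
)
print(f"fused QWK vs. reference scores: {kappa:.3f}")
```

Fusing at the prediction level, as sketched here, lets heterogeneous systems (kernel methods over engineered features, neural encoders over raw dialog text) contribute complementary signal, consistent with the abstract's finding that both components contribute prominently to the fused system.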
