
SALSA-TEXT: Self-Attentive Latent Space Based Adversarial Text Generation

Gagnon-Marchand Jules, Sadeghi Hamed, Haidar Md. Akmal, Rezagholizadeh Mehdi. Canadian AI 2018

Applications Attention Mechanism Language Modeling Model Architecture Pretraining Methods Security Transformer

Inspired by the success of the self-attention mechanism and the Transformer architecture in sequence transduction and image generation, we propose novel self-attention-based architectures to improve the performance of adversarial latent-code-based schemes for text generation. Adversarial latent-code-based text generation has recently gained considerable attention due to its promising results. In this paper, we take a step toward fortifying the architectures used in these setups, specifically the adversarial autoencoder (AAE) and the adversarially regularized autoencoder (ARAE), and we benchmark these two latent-code-based adversarial methods against our models. In our experiments, the Google sentence compression dataset is used to compare our method with these baselines under various objective and subjective measures. The experiments demonstrate that the proposed self-attention-based models outperform the state of the art in adversarial latent-code-based text generation.
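To make the idea concrete, the sketch below shows one way an ARAE-style latent-code GAN could swap its usual recurrent encoder for a Transformer self-attention encoder, with the adversarial game played over latent codes. This is a minimal illustration, not the authors' implementation: all module names, dimensions, the mean-pooling step, and the WGAN-style losses are assumptions made for the example.

```python
# Minimal sketch (illustrative, not the paper's code): an ARAE-style setup
# where a self-attention encoder produces the latent codes that a critic
# must tell apart from codes sampled by a generator.
import torch
import torch.nn as nn

class SelfAttentiveEncoder(nn.Module):
    """Encodes a token sequence into a single latent code via self-attention."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=2, z_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.to_z = nn.Linear(d_model, z_dim)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))      # (batch, seq_len, d_model)
        return self.to_z(h.mean(dim=1))           # mean-pool to one code per sentence

class Generator(nn.Module):
    """Maps Gaussian noise to fake latent codes."""
    def __init__(self, noise_dim=64, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(noise_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, z_dim))

    def forward(self, noise):
        return self.net(noise)

class Critic(nn.Module):
    """Scores latent codes; trained to separate real (encoded) from fake codes."""
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, z):
        return self.net(z)

# One adversarial step over the latent space (WGAN-style losses, illustrative).
enc, gen, critic = SelfAttentiveEncoder(vocab_size=10000), Generator(), Critic()
tokens = torch.randint(0, 10000, (8, 20))         # dummy batch of token ids
z_real = enc(tokens)                              # codes from real sentences
z_fake = gen(torch.randn(8, 64))                  # codes from noise
critic_loss = critic(z_fake).mean() - critic(z_real).mean()
gen_loss = -critic(z_fake).mean()
```

A full training loop would also need a decoder reconstructing sentences from the latent codes and alternating updates between the autoencoder and the adversarial pair; those parts are omitted here for brevity.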
