Seamless: Multilingual Expressive And Streaming Speech Translation

Seamless Communication, Barrault Loïc, Chung Yu-An, Meglioli Mariano Coria, Dale David, Dong Ning, Duppenthaler Mark, Duquenne Paul-Ambroise, Ellis Brian, Elsahar Hady, Haaheim Justin, Hoffman John, Hwang Min-Jae, Inaguma Hirofumi, Klaiber Christopher, Kulikov Ilia, Li Pengwei, Licht Daniel, Maillard Jean, Mavlyutov Ruslan, Rakotoarison Alice, Sadagopan Kaushik Ram, Ramakrishnan Abinesh, Tran Tuan, Wenzek Guillaume, Yang Yilin, Ye Ethan, Evtimov Ivan, Fernandez Pierre, Gao Cynthia, Hansanti Prangthip, Kalbassi Elahe, Kallet Amanda, Kozhevnikov Artyom, Gonzalez Gabriel Mejia, Roman Robin San, Touret Christophe, Wong Corinne, Wood Carleigh, Yu Bokai, Andrews Pierre, Balioglu Can, Chen Peng-Jen, Costa-jussà Marta R., Elbayad Maha, Gong Hongyu, Guzmán Francisco, Heffernan Kevin, Jain Somya, Kao Justine, Lee Ann, Ma Xutai, Mourachko Alex, Peloquin Benjamin, Pino Juan, Popuri Sravya, Ropers Christophe, Saleem Safiyyah, Schwenk Holger, Sun Anna, Tomasello Paden, Wang Changhan, Wang Jeff, Wang Skyler, Williamson Mary. arXiv 2023

Tags: Attention Mechanism, Ethics And Bias, Has Code, Model Architecture, Multimodal Models, RAG, Tools, Transformer

Large-scale automatic speech translation systems today lack key features that help machine-mediated communication feel seamless when compared to human-to-human dialogue. In this work, we introduce a family of models that enable end-to-end expressive and multilingual translations in a streaming fashion. First, we contribute an improved version of the massively multilingual and multimodal SeamlessM4T model, SeamlessM4T v2. This newer model, incorporating an updated UnitY2 framework, was trained on more low-resource language data. SeamlessM4T v2 provides the foundation on which our next two models are built. SeamlessExpressive enables translation that preserves vocal styles and prosody. Compared to previous efforts in expressive speech research, our work addresses certain underexplored aspects of prosody, such as speech rate and pauses, while also preserving the style of one's voice. SeamlessStreaming, in turn, leverages the Efficient Monotonic Multihead Attention (EMMA) mechanism to generate low-latency target translations without waiting for complete source utterances; it is the first of its kind to enable simultaneous speech-to-speech/text translation for multiple source and target languages. To ensure that our models can be used safely and responsibly, we implemented the first known red-teaming effort for multimodal machine translation, a system for the detection and mitigation of added toxicity, a systematic evaluation of gender bias, and an inaudible localized watermarking mechanism designed to dampen the impact of deepfakes. Finally, we bring major components from SeamlessExpressive and SeamlessStreaming together to form Seamless, the first publicly available system that unlocks expressive cross-lingual communication in real time. The contributions of this work are publicly released and accessible at https://github.com/facebookresearch/seamless_communication.
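
The released checkpoints can be tried directly. The sketch below runs offline speech-to-speech translation through the Hugging Face Transformers port of SeamlessM4T v2; it is a minimal illustration under stated assumptions, not the paper's own pipeline. The `facebook/seamless-m4t-v2-large` checkpoint name, the input file path, and the language codes are assumptions, and the official seamless_communication repository exposes its own inference and streaming tooling.

```python
# Minimal SeamlessM4T v2 speech-to-speech translation sketch
# (assumes the Hugging Face Transformers port; the official
# seamless_communication repo provides its own inference classes).
import torch
import torchaudio
from transformers import AutoProcessor, SeamlessM4Tv2Model

MODEL_ID = "facebook/seamless-m4t-v2-large"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = SeamlessM4Tv2Model.from_pretrained(MODEL_ID)

# Load an input utterance, mix down to mono, and resample to the 16 kHz the model expects.
waveform, sample_rate = torchaudio.load("english_utterance.wav")  # placeholder path
waveform = torchaudio.functional.resample(waveform.mean(dim=0), sample_rate, 16_000)

# Translate English speech into French speech (three-letter language codes).
inputs = processor(audios=waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
audio = model.generate(**inputs, tgt_lang="fra")[0].cpu().numpy().squeeze()

# The vocoder emits 16 kHz audio; write it back to disk.
torchaudio.save("french_utterance.wav", torch.from_numpy(audio).unsqueeze(0), 16_000)
```

For text output instead of speech, the task-specific `SeamlessM4Tv2ForSpeechToText` class can be substituted for the full model; the SeamlessExpressive and SeamlessStreaming variants described above are served through the repository's own agents rather than this offline path.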
