
Should Attention Be All We Need? The Epistemic And Ethical Implications Of Unification In Machine Learning

Nic Fishman, Leif Hancox-Li. arXiv 2022

[Paper]    
Tags: Applications, Attention Mechanism, Ethics And Bias, Merging, Model Architecture, Pretraining Methods, Transformer

“Attention is all you need” has become a fundamental precept in machine learning research. Originally designed for machine translation, transformers and the attention mechanisms that underpin them now find success across many problem domains. Given the apparently domain-agnostic success of transformers, many researchers are excited that similar model architectures can be successfully deployed across diverse applications in vision, language, and beyond. We consider the benefits and risks of these waves of unification on both epistemic and ethical fronts. On the epistemic side, we argue that many of the arguments in favor of unification in the natural sciences fail to transfer to the machine learning case, or transfer only under assumptions that might not hold. Unification also introduces epistemic risks related to portability, path dependency, methodological diversity, and increased black-boxing. On the ethical side, we discuss risks that emerge from these epistemic concerns, as well as the further marginalization of underrepresented perspectives, the centralization of power, and the deployment of fewer models across more domains of application.

Similar Work