
Freely Long-Thinking Transformer (FraiLT)

Tabak Akbay. arXiv 2024

[Paper]    
Model Architecture · Pretraining Methods · Transformer

Freely Long-Thinking Transformer (FraiLT) is a transformer variant designed to enhance processing capability without scaling up model size. It takes a recursive approach, iterating over a subset of its layers multiple times, and introduces iteration encodings so that the shared layers remain aware of which cycle they are in. This recursive reuse lets FraiLT approach the interpretive depth of larger models in a compact form. When evaluated on a synthetic story dataset, FraiLT outperformed larger models while demanding less memory, a step toward more efficient and accessible language models.
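The entry describes only the core idea, a recursive pass over shared layers plus iteration encodings, so the following is a minimal PyTorch sketch of that idea. The class name `RecursiveEncoder`, the use of `nn.TransformerEncoderLayer`, the layer and iteration counts, and the additive form of the iteration encoding are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RecursiveEncoder(nn.Module):
    """Sketch: apply a shared stack of blocks several times, adding a
    learned iteration encoding before each pass so the reused weights
    can condition on which cycle they are executing."""

    def __init__(self, d_model=256, nhead=4, num_shared_layers=2, num_iterations=3):
        super().__init__()
        self.shared_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_shared_layers)]
        )
        # One learned vector per iteration, broadcast over batch and sequence.
        self.iteration_encoding = nn.Embedding(num_iterations, d_model)
        self.num_iterations = num_iterations

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        for i in range(self.num_iterations):
            x = x + self.iteration_encoding.weight[i]  # tag the current cycle
            for layer in self.shared_layers:           # reuse the same weights
                x = layer(x)
        return x

model = RecursiveEncoder()
out = model(torch.randn(2, 16, 256))  # output shape matches input: (2, 16, 256)
```

Adding a per-cycle embedding to the hidden states is one plausible way to realize iteration encodings; concatenation or other conditioning schemes would be alternatives, and the paper should be consulted for the actual mechanism.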

Similar Work