
Tab-CoT: Zero-Shot Tabular Chain of Thought

Ziqi Jin, Wei Lu. arXiv 2023

[Paper]    
Few Shot Prompting · RAG · Reinforcement Learning

Chain-of-thought (CoT) prompting methods have been successful in various natural language processing (NLP) tasks thanks to their ability to unveil the underlying complex reasoning processes. Such reasoning processes typically exhibit implicitly structured steps. Recent efforts have also begun investigating methods that encourage more explicitly structured reasoning procedures to be captured. In this work, we propose Tab-CoT, a novel tabular-format CoT prompting method, which allows the complex reasoning process to be explicitly modelled in a highly structured manner. Despite its simplicity, we show that our approach is capable of performing reasoning across multiple dimensions (i.e., both rows and columns). We demonstrate our approach's strong zero-shot and few-shot capabilities through extensive experiments on a range of reasoning tasks.
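In practice, zero-shot Tab-CoT replaces the usual "Let's think step by step" trigger of zero-shot CoT with a table header that the model fills in row by row, followed by an answer-extraction step. Below is a minimal sketch of this two-stage prompting scheme, assuming the |step|subquestion|process|result| column schema described in the paper; `call_llm` is a hypothetical placeholder for whatever completion API you use, not part of the paper's code.

```python
# Minimal sketch of zero-shot Tab-CoT two-stage prompting.
# `call_llm` is a hypothetical stand-in for an LLM completion call;
# wire it to your own client (hosted API, local model, etc.).

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("connect this to a completion API")

# Table header that triggers structured, row-by-row reasoning
# (column schema as in the paper: step / subquestion / process / result).
TABLE_HEADER = "|step|subquestion|process|result|"

def tab_cot(question: str) -> str:
    # Stage 1: reasoning extraction -- the model completes the table.
    reasoning_prompt = f"Question: {question}\n{TABLE_HEADER}"
    table = call_llm(reasoning_prompt)

    # Stage 2: answer extraction -- append the generated table and ask
    # for the final answer, mirroring the zero-shot CoT recipe.
    answer_prompt = f"{reasoning_prompt}{table}\nTherefore, the answer is"
    return call_llm(answer_prompt).strip()

if __name__ == "__main__":
    print(tab_cot(
        "A bakery sold 24 cupcakes in the morning and twice as many "
        "in the afternoon. How many cupcakes were sold in total?"
    ))
```

Because the header alone elicits the table, the scheme needs no hand-written demonstrations in the zero-shot setting; few-shot variants simply prepend completed example tables to the same prompt.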

Similar Work