Testing the Depth of ChatGPT's Comprehension via Cross-Modal Tasks Based on ASCII-Art: GPT3.5's Abilities in Regard to Recognizing and Generating ASCII-Art Are Not Totally Lacking

David Bayani. arXiv 2023

[Paper]
Agentic Attention Mechanism Distillation Efficiency And Optimization GPT Model Architecture Multimodal Models Reinforcement Learning

In the eight months since their release, ChatGPT and its underlying model, GPT3.5, have garnered massive attention due to their potent mix of capability and accessibility. While a niche industry of papers has emerged examining the scope of capabilities these models possess, the information fed to and extracted from these networks has been either natural-language text or stylized, code-like language. Drawing inspiration from the prowess we expect a truly human-level intelligent agent to exhibit across multiple signal modalities, in this work we examine GPT3.5’s aptitude for visual tasks, where the inputs feature content provided as ASCII-art without overt distillation into a lingual summary. We conduct experiments analyzing the model’s performance on image-recognition tasks after various transforms typical in visual settings, trials investigating knowledge of image parts, and tasks covering image generation.
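The recognition-under-transform setup described in the abstract can be sketched as follows. Everything here is an illustrative assumption rather than the paper's actual materials: the triangle figure, the `flip_horizontal` transform, and the prompt wording are all hypothetical stand-ins for the kinds of stimuli and queries such an experiment might use.

```python
# Hedged sketch of an ASCII-art recognition trial: build a figure, apply a
# visual-style transform, and wrap the result in a recognition query.
# The art, transform, and prompt text are illustrative, not from the paper.

ART = [
    "  /\\  ",
    " /  \\ ",
    "/____\\",
]

def flip_horizontal(rows):
    """Mirror an ASCII-art figure left-to-right.

    Reverses each row and swaps direction-sensitive glyphs so the
    mirrored figure still reads correctly.
    """
    swap = {"/": "\\", "\\": "/", "(": ")", ")": "(", "<": ">", ">": "<"}
    return ["".join(swap.get(c, c) for c in reversed(row)) for row in rows]

def recognition_prompt(rows):
    """Wrap the (possibly transformed) art in a minimal recognition query."""
    return "What does the following ASCII-art depict?\n\n" + "\n".join(rows)

if __name__ == "__main__":
    print(recognition_prompt(flip_horizontal(ART)))
```

Because the glyph swap and the row reversal are both involutions, applying the flip twice recovers the original figure, which gives a cheap sanity check on any transform used in such trials.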

Similar Work