
Point-bind & Point-llm: Aligning Point Cloud With Multi-modality For 3D Understanding, Generation, And Instruction Following

Guo Ziyu, Zhang Renrui, Zhu Xiangyang, Tang Yiwen, Ma Xianzheng, Han Jiaming, Chen Kexin, Gao Peng, Li Xianzhi, Li Hongsheng, Heng Pheng-ann. arXiv 2023

[Paper] [Code]
Tags: Applications, Fine-Tuning, Has Code, Pretraining Methods, Reinforcement Learning, Training Techniques

We introduce Point-Bind, a 3D multi-modality model that aligns point clouds with 2D images, language, audio, and video. Guided by ImageBind, we construct a joint embedding space between 3D and the other modalities, enabling many promising applications, e.g., any-to-3D generation, 3D embedding arithmetic, and 3D open-world understanding. On top of this, we further present Point-LLM, the first 3D large language model (LLM) that follows 3D multi-modal instructions. Via parameter-efficient fine-tuning, Point-LLM injects the semantics of Point-Bind into pre-trained LLMs, e.g., LLaMA; it requires no 3D instruction data yet exhibits superior 3D and multi-modal question-answering capacity. We hope our work sheds light for the community on extending 3D point clouds to multi-modality applications. Code is available at https://github.com/ZiyuGuo99/Point-Bind_Point-LLM.
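To make the joint-embedding idea concrete, here is a minimal numpy sketch, not the authors' implementation: two hypothetical stand-in "encoders" (random linear projections) map modality-specific features into one shared space, a symmetric InfoNCE loss aligns paired 3D/image embeddings (the kind of contrastive objective used to align a 3D encoder to frozen ImageBind embeddings), and "embedding arithmetic" is simply vector addition plus cosine-similarity retrieval in that space. All names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in encoders: random projections into a shared 8-dim
# space. In the paper, the 3D encoder is trained so its outputs align with
# frozen ImageBind embeddings; here the shapes only illustrate the idea.
D_POINT, D_IMAGE, D_EMB = 6, 12, 8
W_point = rng.normal(size=(D_POINT, D_EMB))
W_image = rng.normal(size=(D_IMAGE, D_EMB))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def embed(features, weights):
    # Project modality-specific features into the shared space, then
    # L2-normalize so dot products become cosine similarities.
    return l2_normalize(features @ weights)

def info_nce(anchor, positive, temperature=0.07):
    # Symmetric InfoNCE: each 3D embedding should match its paired image
    # embedding against all other pairs in the batch, and vice versa.
    logits = anchor @ positive.T / temperature
    idx = np.arange(len(anchor))
    def ce(lg):
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()               # diagonal = positives
    return 0.5 * (ce(logits) + ce(logits.T))

# A batch of 4 paired (point-cloud, image) feature vectors.
pts = rng.normal(size=(4, D_POINT))
imgs = rng.normal(size=(4, D_IMAGE))
z3d, z2d = embed(pts, W_point), embed(imgs, W_image)
loss = info_nce(z3d, z2d)

# "3D embedding arithmetic": add embeddings from two modalities and retrieve
# the nearest item in the shared space by cosine similarity.
query = l2_normalize(z3d[0] + z2d[1])
sims = query @ np.concatenate([z3d, z2d]).T
```

In the real system the image/text/audio side comes from a frozen ImageBind model, so only the 3D encoder is optimized against this kind of contrastive loss; the arithmetic property then falls out of the shared, normalized space.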

Similar Work